Article

Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media

by Ramya Akula *,† and Ivan Garibay *,†
Complex Adaptive Systems Lab, Department of Computer Science, University of Central Florida, Orlando, FL 32816, USA
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2021, 23(4), 394; https://doi.org/10.3390/e23040394
Received: 20 February 2021 / Revised: 10 March 2021 / Accepted: 23 March 2021 / Published: 26 March 2021
With more than half the world population online, social media plays a very important role in the lives of individuals and businesses alike. Social media enables businesses to advertise their products, build brand value, and reach out to their customers. To leverage these platforms, it is important for businesses to process customer feedback in the form of posts and tweets. Sentiment analysis is the process of identifying the emotion, whether positive, negative, or neutral, associated with these social media texts. The presence of sarcasm in text is the main hindrance to the performance of sentiment analysis. Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something very unpleasant, with the intention to insult or ridicule. The inherent ambiguity in sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words in the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text. We show the effectiveness of our approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media. Models trained using our proposed approach are easily interpretable and enable identifying the sarcastic cues in the input text that contribute to the final classification score. We visualize the learned attention weights on a few sample input texts to showcase the effectiveness and interpretability of our model.
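The architecture described above (token embeddings, followed by multi-head self-attention to surface cue-words, followed by a GRU over the attended sequence and a classification head) can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' implementation: all dimensions, layer counts, and hyperparameters here are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    """Sketch of a multi-head self-attention + GRU sarcasm classifier.

    Dimensions are illustrative only; the paper's actual configuration
    may differ.
    """
    def __init__(self, vocab_size=10000, embed_dim=128, num_heads=8, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Multi-head self-attention lets each token attend to every other
        # token, highlighting candidate sarcastic cue-words.
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # A GRU over the attended sequence captures long-range
        # dependencies between the cue-words.
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)  # sarcastic vs. not

    def forward(self, token_ids):
        x = self.embedding(token_ids)                # (batch, seq, embed)
        attended, weights = self.attention(x, x, x)  # self-attention: Q = K = V
        _, h_n = self.gru(attended)                  # final hidden state: (1, batch, hidden)
        logits = self.classifier(h_n.squeeze(0))     # (batch, 1)
        # Returning the attention weights is what makes the model
        # interpretable: they can be visualized over the input tokens.
        return torch.sigmoid(logits), weights

model = SarcasmClassifier()
probs, attn = model(torch.randint(0, 10000, (2, 16)))  # batch of 2 texts, 16 tokens each
```

Here `attn` has shape `(batch, seq, seq)` (head-averaged by default in PyTorch), so a row of it can be rendered as a heatmap over the input text, in the spirit of the attention visualizations the paper describes.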
Keywords: sarcasm detection; self-attention; interpretability; social media analysis
MDPI and ACS Style

Akula, R.; Garibay, I. Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media. Entropy 2021, 23, 394. https://doi.org/10.3390/e23040394
