Entry

Social Media Sentiment Analysis

by Joyce Y. M. Nip 1,* and Benoit Berthelier 2

1 Discipline of Media and Communications, School of Art, Communication and English, Faculty of Arts and Social Sciences, The University of Sydney, Sydney, NSW 2006, Australia
2 Discipline of Korean Studies, Faculty of Arts and Social Sciences, The University of Sydney, Sydney, NSW 2006, Australia
* Author to whom correspondence should be addressed.
Encyclopedia 2024, 4(4), 1590-1598; https://doi.org/10.3390/encyclopedia4040104
Submission received: 11 July 2024 / Revised: 14 October 2024 / Accepted: 16 October 2024 / Published: 21 October 2024
(This article belongs to the Collection Encyclopedia of Social Sciences)

Definition

Social media sentiment analysis is the computational detection and extraction of subjective human evaluations of objects, as embedded in social media content. Earlier sentiment analysis was conducted on isolated written texts and typically classified sentiment into positive, negative, and neutral states. Social media sentiment analysis has come to include multi-modal texts, temporal dynamics, interactions, network relationships, and sentiment propagation. Specific emotions and sentiment intensity are also detected.

1. Introduction

Sentiment analysis began as a field in computer science and has since extended into the social sciences and management studies. Because emotions, cognition, and behavior are interconnected, sentiment analysis can help researchers understand individual attitudes and predict human behavior, as well as inform remedial and preventive actions at the individual and societal levels. Sentiment can be taken to refer to the feeling that underlies an expressed positive or negative opinion, or the feeling implied by a neutral opinion; the field is therefore also called opinion mining [1]. As summed up by [2], a feeling is “a sensation that has been checked against previous experiences and labelled”, while an emotion is “the projection/display of a feeling”. This entry focuses on social media sentiment analysis, a field that has grown in prominence in both academia and industry as social media has become increasingly prevalent in everyday life. It explains the field’s genesis, discusses its applications, and identifies its types, features, and approaches. It concludes with a discussion of emerging developments in large language models and future challenges.

2. Genesis

Social media sentiment analysis developed from earlier sentiment analysis. The paper “Thumbs up? Sentiment classification using machine learning techniques” [3], cited over 12,000 times (according to Google Scholar) at the time of writing, is widely acknowledged to have established sentiment analysis as a field in computational studies, although the paper did not use the term as such. Ref. [3] cited works published as early as 1994 that used natural language processing (NLP), machine learning, and computational linguistics to categorize texts and to differentiate objective from subjective sentences. Ref. [3]’s study departed from most previous work by using “prior-knowledge-free” supervised machine learning algorithms, i.e., algorithms that do not rely on manually crafted rules or domain-specific sentiment lexicons. The study found that although such prior-knowledge-free algorithms were able to classify the topics of documents better than humans, their classification of the sentiment of movie reviews (positive or negative) was not as good [3]. Ref. [4]’s classification of four types of user reviews pioneered sentiment analysis methods in another way, by using an unsupervised machine learning algorithm.
Sentiment analysis in the early 2000s was boosted by developments in lexical resources, such as WordNet, a lexical database initially created in 1985 for the English language and now available in more than 200 languages, and LIWC (Linguistic Inquiry and Word Count), a text analysis software package based on a dictionary of categorized words, first released in 1997 in English and now available in more than 10 languages. The social media environment supports the production and distribution of texts in new ways and pushes sentiment analysis to new frontiers (discussed below). However, it also brings challenges in the volume and speed of data generation, the multiplicity of texts, fragmented contexts, and data noise. Recent developments in artificial intelligence, particularly deep learning-based methods, help enhance analysis tools and operations but also threaten to distort the authenticity of sentiment extracted from social media.
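As a minimal illustration of the dictionary-based approach popularized by resources such as LIWC, the sketch below counts hits against small positive and negative word lists; the lists are illustrative placeholders rather than actual LIWC categories.

```python
# Illustrative word lists; real lexicons such as LIWC contain thousands of
# categorized entries across many emotional and cognitive categories.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def lexicon_sentiment(text: str) -> str:
    """Label a text positive/negative/neutral by counting lexicon hits."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(lexicon_sentiment("I love this phone, the camera is great!"))  # positive
```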

3. Applications

Sentiment analysis was initially applied to online product reviews on platforms like Amazon and eBay in the early and mid-2000s. The positive and negative feedback was used to summarize customer opinions and help identify trends. Since then, the technology has been adopted across a wide range of domains in both industry and academia.
In politics, for instance, sentiment analysis started to be adopted by campaign managers and integrated into certain pollsters’ models in the early 2010s. The 2008 Obama campaign is famously known for drawing on social media analysis, coupling user interaction metrics with geolocation to target its electorate, but it appears that social media sentiment analysis was only used in the 2012 campaign, to monitor the tone of tweets [5]. Sentiment analysis has also been used by governments to gauge popular reactions to certain issues and to design and target public information or propaganda campaigns more effectively. In China, the formation of the Bureau of “Yuqing” (Public Intelligence) Information in the Propaganda Department of the Chinese Communist Party (CCP) in 2004 prompted the creation of a large public opinion analysis industry that monitors citizens’ online discourse and sentiment [6]. Sentiment analysis based on extensive social media data helps various levels of Party and government agencies forecast and forestall protest events and better target their propaganda efforts [7,8]. Beyond China, sentiment analysis is also widely used by states as part of their social media intelligence (SOCMINT) efforts to police and surveil their citizens’ behavior. In the United States, the Department of Homeland Security, among others, uses sentiment analysis to monitor social media [9], as do various government and law enforcement agencies in the United Kingdom [10] and around the world [11]. Social media sentiment analysis also informs policy: the European Union has funded its own public sentiment monitoring platforms for policymakers [12,13].
However, social media sentiment analysis is highly reflexive. In China, a 2022 government directive on content curation algorithms requires them to promote positive content over negative opinions [14,15]. A notable side effect of this manufacture of positivity is that social media sentiment monitoring may over time become less accurate and less predictive as it ends up merely reflecting the state-mandated preferences of curation algorithms. This peculiar aspect of the technology is particularly salient in its applications in commercial domains like marketing and finance. As the popularity of online image monitoring for brands and products increased throughout the 2010s, so did attempts to manipulate online perceptions through sponsored or fake content [16]. The reliability of sentiment analysis-based metrics has been affected as a result, leading to efforts to detect and isolate “authentic” opinions using artificial intelligence and heuristics [17].
In finance, sentiment analysis has been widely known since [18]’s seminal work showing the impact of news sentiment on market volatility. The Bloomberg terminal, for instance, one of the most commonly used tools in the financial community, has included sentiment metrics sourced from the news since 2010, later expanding to social media sentiment [19]. However, sentiment analysis comes with multiple limitations. It is likely to be a lagging, rather than leading, indicator. Furthermore, financial strategies are subject to alpha decay: the more public a strategy becomes, the lower the returns it generates. This means that while there is a considerable number of publicly available academic papers purporting to predict market movements based on sentiment analysis [20], there is little chance of them having practical value. While some firms in the financial industry reportedly do use sentiment analysis, it is likely to be part of larger, complex models that are unlikely to be disclosed.

4. Types

The affordances [21,22,23,24,25,26] of social media have created new objects of study for social media sentiment analysis. However, there is no universal agreement on what defines social media. Some scholars consider online product review sites (e.g., Yelp) as a type of social media [27,28]. However, these sites are usually weak in the interaction dimension, one of four dimensions commonly considered to define social media: (1) ordinary individuals can be content creators, (2) ordinary individuals can be content distributors, (3) communication is interactive, and (4) the mode of communication is many-to-many [27,28,29,30,31,32,33,34,35]. Many people associate social media with social networking services, but they are only one type of social media. Some definitions of social media include a specific technology, for example, web 2.0 [27] or the Internet [30], while most definitions assume that digital communication technologies provide the infrastructure of communication. Popular types of social media include blogging platforms (e.g., WordPress, Medium), microblogging platforms (e.g., X–formerly Twitter, Weibo), photo sharing sites (e.g., Instagram), video sharing sites (e.g., YouTube, TikTok), social networking sites (e.g., Facebook, LinkedIn), messaging apps (e.g., WhatsApp, WeChat), live streaming platforms (e.g., Twitch, Douyin), discussion forums (e.g., Reddit, Baidu Tieba), and wikis (e.g., Wikipedia). X has been researched the most in social media sentiment analysis. An advanced keyword search on Google Scholar on 9 July 2024 for “allintitle: sentiment Twitter” returned 5160 results, far more than YouTube, which came in second with 259 results.
The affordances of social media platforms vary, but the field of social media sentiment analysis is generally distinguished from traditional sentiment analysis by encompassing the following additional types of analysis:
(1)
Multi-modal analysis. Many social media platforms support multi-modal communication, incorporating text, images, videos and sometimes audio. Each media mode offers distinct sentiment cues, and multi-modal analysis integrates these diverse data types to derive a richer, more accurate sentiment assessment. For example, textual analysis might interpret a social media post as neutral, but the inclusion of a humorous sound or a sarcastic meme could shift that interpretation to positive or negative, respectively. However, most social media sentiment analyses still focus on written texts.
(2)
Temporal dynamics. The dynamic nature of social media means that sentiments can fluctuate rapidly in response to world events, trends or the viral spread of particular content. Temporal analysis tracks these changes over time, enabling researchers to observe the stability and evolution of sentiments within a community and to identify or interpret causal relationships between synchronously unfolding events (a minimal sketch of such tracking appears after this list).
(3)
Interaction analysis. Texts that respond to each other in the flow of conversation on social media enable interpretation of causal relationships between sentiments expressed in the posts, comments and replies. Interaction analysis helps to clarify the amplification of sentiments.
(4)
Network relationships. On social media platforms that support networked communication, analyzing the interaction of sentiments around an issue helps clarify how sentiments cluster within networks and identify influencers.
(5)
Sentiment propagation. Combining the analysis of temporal dynamics and network relationships enables mapping the propagation of sentiments across multiple social networks over time, which is crucial for studying phenomena like viral content or the spread of misinformation.
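As a minimal sketch of the temporal-dynamics analysis described in point (2), the snippet below buckets invented post-level polarity scores by day and reports the daily mean, the kind of time series inspected for sentiment shifts around events; a real pipeline would first score each post with a sentiment model.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Illustrative data: (posting date, polarity score in [-1, 1]). In practice
# each polarity would come from a sentiment model applied to the post text.
posts = [
    (date(2024, 7, 1), 0.6),
    (date(2024, 7, 1), -0.2),
    (date(2024, 7, 2), -0.8),
    (date(2024, 7, 2), -0.5),
]

# Group polarity scores by day and report the daily mean.
daily = defaultdict(list)
for day, polarity in posts:
    daily[day].append(polarity)

for day in sorted(daily):
    print(day, round(mean(daily[day]), 2))  # 2024-07-01 0.2 / 2024-07-02 -0.65
```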

5. Features

The social media environment poses specific challenges to sentiment analysis:
(1)
Volume and speed of data generation. Sentiment analysis must deal with the vast and continuously growing streams of data generated by millions of users on social media platforms. This volume and speed of data generation require high-capacity computational tools.
(2)
Multiplicity of texts. Analysis involves texts of varied medium, language, style, structure and length. A post can be written in more than one language or dialect. The style can range from highly formal to highly informal, involving, for example, abbreviations, emojis, slang and profane language. Rhetorical devices such as sarcasm and irony communicate a sentiment opposite to the literal meaning of the words. A text can be as short as a single emoji or word, or it can be a lengthy, well-structured argument.
(3)
Fragmented contexts. Social media posts often reference external events, previous conversations or shared cultural knowledge that may not be explicitly stated within the text, making it difficult for sentiment analysis systems to accurately interpret the full meaning and emotional tone. Furthermore, different platforms have different features (threads, reactions, hashtags, retweets, etc.) that complexify the attribution and identification of sentiment.
(4)
Data noise. Some social media platforms allow certain types of accounts to include automated promotion materials in their posts, creating noise that needs to be identified and discarded from relevant information. Fake content generated by bots is another source of data noise that can skew the analysis.
Despite the significant volume and speed of data generation on private messaging apps, most such data are not publicly accessible for analysis. The multiplicity of texts is most prominent on large-scale, lightly moderated platforms geared toward general-purpose public communication, compared with more private platforms, platforms geared toward professional communication, and platforms managed according to stricter rules.

6. Approaches

Over the past decade, a number of new approaches to sentiment analysis have emerged to tackle some of the challenges posed by social media affordances:
(1)
Deep learning-based approaches. By the 2010s, state-of-the-art results in natural language processing were increasingly achieved by deep learning models, and sentiment analysis was no exception [36]. The use of convolutional (CNN) and recurrent (RNN) neural networks for text analysis, together with the mathematical representation of words as vectors (embeddings), enables the capture of the semantic and syntactic features of words. This improved capability allows for better handling of synonymy, spelling variations, tone shifts and implicit sentiment [37]. Early sentiment models considered words in isolation or in groups of two or three and lacked context awareness. For instance, they would treat words with slightly different spellings, such as “favorite” and “favourite”, as entirely distinct, while embeddings encode them as almost equivalent. Furthermore, earlier models that relied solely on words irrespective of their relative position in the sentence (the bag-of-words approach) would struggle to handle negated turns of phrase such as “I do not think that it was so bad at all”. By contrast, deep learning models are able to consider the entire sequence to understand that the sentiment may be more nuanced than the sole presence of the word “bad” could initially suggest (a sketch of this contrast appears after this list). Word embeddings also facilitated the handling of multiple languages; their popularization in the mid-2010s was followed by a sharp increase in multilingual sentiment analysis studies of social media [38]. These approaches also paved the way for the large language model (LLM)-based approaches described further below.
(2)
Aspect-based sentiment analysis (ABSA). Sentiments expressed in a social media post can be directed at specific and distinct aspects of an idea, event, product or service. Early sentiment analysis techniques indiscriminately aggregated those opinions into a single sentiment score, applied at various levels of granularity (sentence, paragraph, document). In contrast, ABSA tries to identify the different aspect terms mentioned in a text before gauging the sentiment polarity of the judgments expressed with regard to each aspect [39]. Identifying aspect terms and matching them to expressions of sentiment are both complex tasks with their own challenges, independent of the prediction of sentiment polarity. ABSA was originally applied primarily to customer reviews to identify which specific features of a product customers liked or disliked. For instance, in a product review for a smartphone, a user might write: “The camera quality is excellent, but the battery life is disappointing”. ABSA would identify two aspects (camera quality and battery life) and assign positive sentiment to the former and negative sentiment to the latter (see the ABSA sketch after this list). ABSA has also been successfully applied to the study of political topics and social issues. For instance, [40] used ABSA to analyze public sentiment about COVID-19 based on 170 different aspects, ranging from “side effects” to “vaccine campaign”.
(3)
Multimodal sentiment analysis. Sentiment analysis for social media has long been dominated by textual approaches, as they are simpler to implement and less computationally expensive. Yet social media data often comes in a mix of text, image, sound and video, and increasingly favors the visual over the textual. Traditionally, multimodal sentiment analysis combined several media-specific models and aggregated their results into a final sentiment score [41]. In analyzing a video post, separate models would process the spoken words (through speech-to-text conversion), facial expressions and gestures (with a visual model) and any accompanying text, such as comments and captions (with a textual model). Each model would provide its own sentiment score, which would then be combined using various fusion techniques. Early approaches often used simple averaging or weighted sum methods, while more sophisticated techniques employed machine learning algorithms to learn optimal fusion strategies [42] (see the fusion sketch after this list). The rise of multimodal sentiment analysis has been particularly important for platforms like Instagram, TikTok and YouTube, where visual content plays a crucial role. For example, ref. [43] developed a multimodal approach to analyzing sentiment in YouTube videos that combines visual and textual features. Their approach significantly improved sentiment classification accuracy compared with methods that used only textual or only visual models. However, as the latest generation of LLMs shows, AI models are increasingly becoming multimodal by default, making this form of sentiment analysis more accessible [44]. These multimodal LLMs are trained on vast amounts of paired text-image data, allowing them to develop a unified understanding of concepts across modalities. This integrated approach potentially offers more nuanced and context-aware sentiment analysis, as the model can inherently consider the interplay between different modes of expression.
(4)
Knowledge graphs and domain-specific analysis. Sentiment interpretation is highly context-dependent. At the same time, social media discussions often involve domain-specific terminology, cultural references or current events that require contextual knowledge. Knowledge graphs provide a structured representation of entities, concepts and their relationships, allowing domain-specific knowledge to be incorporated into the sentiment analysis process. This approach can help resolve ambiguities and identify implicit sentiments in complex or specialized contexts. In specialized fields like healthcare, knowledge graphs can provide context for technical terms that might be neutral in general language but carry sentiment in a specific domain (see the knowledge-graph sketch after this list). In particular, the integration of knowledge graphs with language models such as BERT or, more recently, GPT has proven very effective [45,46].
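The following sketches illustrate four of the approaches above in deliberately simplified form; all lexicons, vectors, scores and data in them are illustrative assumptions rather than outputs of real models or datasets.

First, a minimal sketch of the contrast drawn in approach (1): a bag-of-words representation discards word order and spelling relations, whereas toy, hand-made embedding vectors place spelling variants close together.

```python
from collections import Counter
import math

# Bag-of-words keeps only word counts, so these two sentences with opposite
# sentiment end up looking deceptively similar.
bow_a = Counter("it was not bad at all".split())
bow_b = Counter("it was bad and not good at all".split())
print(bow_a & bow_b)  # shared counts: 'it', 'was', 'not', 'bad', 'at', 'all'

# Toy embedding lookup (vectors invented for illustration): spelling variants
# map to nearly identical vectors, unlike in a bag-of-words vocabulary.
embedding = {
    "favorite":  [0.71, 0.32, -0.11],
    "favourite": [0.70, 0.33, -0.10],
    "terrible":  [-0.65, 0.12, 0.44],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(round(cosine(embedding["favorite"], embedding["favourite"]), 3))  # close to 1.0
print(round(cosine(embedding["favorite"], embedding["terrible"]), 3))   # much lower
```

Second, a minimal rule-based sketch of the ABSA idea in approach (2), using the smartphone example above; the aspect list and opinion lexicon are illustrative placeholders, not a trained ABSA model.

```python
# Aspect terms and the opinion lexicon are illustrative; real ABSA systems
# learn aspect extraction and aspect-level polarity from annotated data.
ASPECTS = {"camera quality", "battery life"}
OPINIONS = {"excellent": "positive", "great": "positive",
            "disappointing": "negative", "poor": "negative"}

def aspect_sentiments(review: str) -> dict:
    """Assign a polarity to each known aspect from opinion words in its clause."""
    results = {}
    for clause in review.lower().replace(",", ".").split("."):
        for aspect in ASPECTS:
            if aspect in clause:
                for word, polarity in OPINIONS.items():
                    if word in clause:
                        results[aspect] = polarity
    return results

review = "The camera quality is excellent, but the battery life is disappointing."
print(aspect_sentiments(review))
# {'camera quality': 'positive', 'battery life': 'negative'}
```

Third, a minimal sketch of the traditional late-fusion strategy described in approach (3): modality-specific models each return a polarity score and a weighted sum yields the overall sentiment; the scores and weights are invented for illustration.

```python
def fuse(scores: dict, weights: dict) -> float:
    """Weighted late fusion of per-modality polarity scores in [-1, 1]."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Illustrative per-modality outputs for a sarcastic video post: the caption
# reads mildly positive, but audio tone and facial expression read negative.
modality_scores = {"text": 0.2, "audio": -0.5, "visual": -0.6}
modality_weights = {"text": 0.3, "audio": 0.3, "visual": 0.4}
print(round(fuse(modality_scores, modality_weights), 2))  # -0.33, negative overall
```

Finally, a minimal sketch of the knowledge-graph idea in approach (4): a tiny hand-made set of triples supplies domain-specific sentiment cues that a general lexicon would miss.

```python
# The triples and lexicon are invented for illustration: a knowledge graph can
# record that a term carries sentiment in a specific domain even though a
# general-purpose lexicon treats it as neutral.
KG = {
    ("remission", "connotes_in", "healthcare"): "positive",
    ("relapse", "connotes_in", "healthcare"): "negative",
}
GENERAL_LEXICON = {"happy": "positive", "awful": "negative"}

def sentiment_cues(text: str, domain: str) -> list:
    """Collect cues from the general lexicon plus domain-specific knowledge."""
    cues = []
    for token in text.lower().split():
        token = token.strip(".,!?")
        if token in GENERAL_LEXICON:
            cues.append((token, GENERAL_LEXICON[token]))
        elif (token, "connotes_in", domain) in KG:
            cues.append((token, KG[(token, "connotes_in", domain)]))
    return cues

print(sentiment_cues("Scans confirm full remission.", "healthcare"))
# [('remission', 'positive')]
```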

7. Large Language Models and Future Directions

Large Language Models (LLMs) such as GPT have drastically altered the landscape of sentiment analysis due to their deep contextual understanding capabilities. Trained on vast corpora encompassing a significant part of the content available on the web, they can grasp subtle linguistic cues, handle complex sentence structures, and work across languages and media types; this extensive “prior knowledge” enables them to contextualize information without needing additional inputs. LLMs are exceptionally adaptable, capable of performing sentiment analysis with little to no training or additional domain-specific knowledge. This adaptability contrasts with previous generations of AI models, which were rarely generalists: improving their performance in a specific domain, such as sentiment analysis on social media, typically required training with manually annotated data or fine-tuning with large amounts of unannotated domain-related data. Newer LLMs like GPT, by contrast, are “general purpose solvers” [47] that do not benefit from such techniques and sometimes actually perform worse with them. Recent studies also show that out-of-the-box ChatGPT achieves performance comparable to state-of-the-art sentiment analysis models [48,49,50].
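As a minimal illustration of such out-of-the-box use, the sketch below builds a plain zero-shot prompt for sentiment classification; `call_llm` is a hypothetical placeholder for whichever chat-completion client an analyst connects, not a real library function.

```python
def build_prompt(post: str) -> str:
    """Assemble a zero-shot sentiment classification prompt for an LLM."""
    return (
        "Classify the sentiment of the social media post below as "
        "positive, negative or neutral. Answer with a single word.\n\n"
        f"Post: {post}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real chat-completion client here.
    raise NotImplementedError("connect this to an LLM API of your choice")

if __name__ == "__main__":
    prompt = build_prompt("Great, my flight is delayed again. Love this airline.")
    print(prompt)  # call_llm(prompt) would return the predicted label
```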
LLM performance can be further significantly improved by using prompt engineering techniques—a practice that involves strategically designing the inputs (or prompts) given to an LLM to elicit more accurate or contextually appropriate responses. For instance, the Chain of Thought prompting technique [51] improves model performance by encouraging the LLM to process information in a stepwise, logical manner, mimicking human reasoning patterns. This approach not only enhances the clarity and relevance of the responses but also makes the decision-making process of the model more transparent and understandable. Additionally, the use of multi-agent systems, where several AI agents interact, critique and refine each other’s outputs [52], introduces a layer of collaborative learning and self-correction. This method can be situated somewhere between the peer-review process used in human-centered tasks and the ensembling methods used to aggregate the output of multiple different models. Multi-agent setups increase the reliability and accuracy of the models by allowing them to learn from and adjust to each other’s analyses. Yet, while LLMs offer great promise for sentiment analysis, they also come with their own specific set of limitations and challenges.
(1)
LLMs are stochastic. This means that the same prompt will produce similar but not identical outputs across runs. This variability poses significant challenges for scientific reproducibility and consistency in sentiment analysis applications. To mitigate this, researchers often resort to averaging results over multiple runs (see the sketch after this list) or to setting the “temperature” (the parameter that controls how much a model is allowed to deviate from the most probable outputs) to the minimum level. However, these approaches add complexity and computational overhead to the analysis process.
(2)
LLMs are black boxes. The inner workings of these models are not transparent, largely because the best-performing models are proprietary and closed-source. Even with open-source versions, the complexity of the model architectures—often consisting of billions of parameters—obscures the rationale behind specific outputs. This opacity makes it challenging for users to debug or improve the models based on specific application needs and hinders the ability to ensure that the models are making decisions for the right reasons. For example, if an LLM misclassifies the emotional tone of a product review, developers would struggle to pinpoint the reason without a clear understanding of the model’s decision-making process. By contrast, earlier models that would give individual words a single sentiment score were very easy to interpret.
(3)
LLM outputs are hard to evaluate. As a direct corollary of the point above, the lack of knowledge about the data on which the models were trained makes it difficult to evaluate their performance. It is challenging to determine whether they outperform previous models because they are genuinely more effective or because the evaluation data were already included in their extensive training data.
(4)
LLMs are biased. Commercial LLMs undergo an alignment process using reinforcement learning from human feedback (RLHF) to mitigate the biases of their training data, avoid hate speech, and refuse to comply with unethical prompts. By sanitizing outputs to avoid generating harmful content, these models may become overly conservative and miss nuanced or contextually specific sentiments. It has indeed been noted that this process may degrade LLMs’ performance on a variety of tasks, from hate speech detection to sentiment analysis [53]. Moreover, biases in LLMs are subtle and context-dependent, making them difficult to detect and correct comprehensively.
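As noted under limitation (1), one common mitigation is to repeat the same prompt and aggregate the results. The sketch below takes a majority vote over several runs; `classify_once` is a hypothetical stand-in for a single stochastic LLM call, with randomness merely simulating run-to-run variability.

```python
from collections import Counter
import random

def classify_once(post: str) -> str:
    # Hypothetical stand-in for one stochastic LLM call; the random choice
    # simulates variability in the returned label across runs.
    return random.choice(["negative", "negative", "negative", "neutral"])

def classify_majority(post: str, runs: int = 5) -> str:
    """Aggregate several runs of the same prompt by majority vote."""
    votes = Counter(classify_once(post) for _ in range(runs))
    return votes.most_common(1)[0][0]

print(classify_majority("Worst update ever, nothing works anymore."))  # usually 'negative'
```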

8. Conclusions

Beyond the aforementioned weaknesses and limitations, the predictive power of social media analysis often depends on combining user-generated content and user interactions on social media with other data sources, such as user demographics and geolocation. However, following the Cambridge Analytica controversy in 2018, open access to social media data has become more restricted due to concerns over privacy and consent. These restrictions are likely to affect academic research more than industrial research, given the stricter ethical compliance requirements in academic institutions. A common challenge faced by both academic and industrial research, however, is the presence of fake content generated either by fabricated human accounts or, increasingly, by AI. This issue is rarely acknowledged, except in studies focusing on bot detection. If social media sentiment analysis continues to assume that social media content represents genuine human opinions, its predictive power will not improve, even if advances in AI help to address previous analytical limitations.

Author Contributions

Conceptualization, J.Y.M.N.; investigation, J.Y.M.N. and B.B.; resources, authors’ personal archives, ChatGPT 4o, ChatGPT 4, Claude 3.5 Sonnet 20240620, Google Scholar, the University of Sydney Library, the World Wide Web; writing—original draft preparation, J.Y.M.N. and B.B.; writing—review and editing, J.Y.M.N. and B.B.; supervision, J.Y.M.N.; project administration, J.Y.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors acknowledge the use of generative artificial intelligence tools ChatGPT 4o, ChatGPT 4, and Claude 3.5 Sonnet 20240620 to conduct background research, identify relevant research publications, and to identify grammatical errors and unclear expressions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, B. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]
  2. Shouse, E. Feeling, emotion, affect. M/C J. 2005, 8. [Google Scholar] [CrossRef]
  3. Pang, B.; Lee, L.; Vaithyanathan, S. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia, PA, USA, 6–7 July 2002; pp. 79–86. [Google Scholar]
  4. Turney, P.D. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. arXiv 2002, arXiv:cs/0212032. [Google Scholar]
  5. Issenberg, S. How Obama’s Team Used Big Data to Rally Voters. MIT Technology Review. 9 December 2012. Available online: https://www.technologyreview.com/2012/12/19/114510/how-obamas-team-used-big-data-to-rally-voters/ (accessed on 15 October 2024).
  6. Hu, Y. From Yulun (Public Opinion) to Yuqing (Public Intelligence): Their History and Practice in China’s Information Management. In The Routledge Companion to Global Internet Histories; Goggin, G., McLelland, M., Eds.; Routledge: London, UK, 2017; pp. 538–550. [Google Scholar]
  7. Hoffman, S. Engineering Global Consent: The Chinese Communist Party’s Data-Driven Power Expansion. Australian Strategic Policy Institute. 2019. Available online: https://ad-aspi.s3.ap-southeast-2.amazonaws.com/2019-10/Engineering%20global%20consent%20V2.pdf (accessed on 17 October 2024).
  8. Thorne, D. Evaluating the Utility of Global Data Collection by Chinese Firms for Targeted Propaganda. Jamestown Foundation. 2020. Available online: https://jamestown.org/program/evaluating-the-utility-of-global-data-collection-by-chinese-firms-for-targeted-propaganda (accessed on 17 October 2024).
  9. Patel, F.; Levinson-Waldman, R.; DenUyl, S.; Koreh, R. Social Media Monitoring: How the Department of Homeland Security Uses Digital Data in the Name of National Security. Brennan Center for Justice. 22 May 2019. Available online: https://www.brennancenter.org/sites/default/files/2019-08/Report_Social_Media_Monitoring.pdf (accessed on 17 October 2024).
  10. Wieshmann, H.; Davies, M.; Sugg, O.; Davis, S.; Ruda, S. Violence in London: What We Know and How to Respond. A Report Commissioned by the Mayor of London’s Violence Reduction Unit. Greater London Authority. 2020. Available online: https://images.london.gov.uk/m/2f62d5c4172448aa/original/Violence-in-London-what-we-know-and-how-to-respond.pdf (accessed on 17 October 2024).
  11. Gohdes, A.R. Repression Technology: Internet Accessibility and State Violence. Am. J. Politi. Sci. 2020, 64, 488–503. [Google Scholar] [CrossRef]
  12. AI4PublicPolicy. Project Information. AI4PublicPolicy. 2020. Available online: https://ai4publicpolicy.eu/project-info/ (accessed on 17 October 2024).
  13. Souri, N. Cutting-Edge WeGov Software Solution Supporting Policy-Makers in the Analysis of SNS. European Commission. Available online: https://joinup.ec.europa.eu/collection/eparticipation-and-evoting/news/cutting-edge-wegov-software-s (accessed on 17 October 2024).
  14. Feng, E. Why the Chinese Government Wants More Feel-Good Stories Posted Online. NPR. 10 January 2022. Available online: https://www.npr.org/2022/01/10/1071766938/why-the-chinese-government-wants-more-feel-good-stories-posted-online (accessed on 17 October 2024).
  15. Wang, J. Platform Responsibility with Chinese Characteristics. Digital Planet, Tufts University. 2022. Available online: https://digitalplanet.tufts.edu/wp-content/uploads/2023/02/DD-Report_1-Jufang-Wang-11.30.22.pdf (accessed on 17 October 2024).
  16. Shen, R.-P.; Liu, D.; Wei, X.; Zhang, M. Your posts betray you: Detecting influencer-generated sponsored posts by finding the right clues. Inf. Manag. 2022, 59, 103719. [Google Scholar] [CrossRef]
  17. Wu, Y.; Ngai, E.W.; Wu, P.; Wu, C. Fake online reviews: Literature review, synthesis, and directions for future research. Decis. Support Syst. 2020, 132, 113280. [Google Scholar] [CrossRef]
  18. Engle, R.F.; Ng, V.K. Measuring and Testing the Impact of News on Volatility. J. Financ. 1993, 48, 1749–1778. [Google Scholar] [CrossRef]
  19. Bloomberg. Finding Novel Ways to Trade on Sentiment Data. Bloomberg. 14 June 2017. Available online: https://www.bloomberg.com/company/stories/finding-novel-ways-trade-sentiment-data (accessed on 17 October 2024).
  20. Du, K.; Xing, F.; Mao, R.; Cambria, E. Financial Sentiment Analysis: Techniques and Applications. ACM Comput. Surv. 2024, 56, 1–42. [Google Scholar] [CrossRef]
  21. Bucher, T.; Helmond, A. The affordances of social media platforms. SAGE Handb. Soc. Media 2018, 1, 233–254. [Google Scholar]
  22. Gaver, W.W. Technology affordances. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 27 April–2 May 1991; pp. 79–84. [Google Scholar]
  23. Gibson, J.J. The Ecological Approach to Visual Perception; Taylor & Francis: Abingdon, UK, 1979; pp. 119–135. [Google Scholar]
  24. Graves, L. The affordances of blogging: A case study in culture and technological effects. J. Commun. Inq. 2007, 31, 331–346. [Google Scholar] [CrossRef]
  25. Norman, D.A. Affordance, conventions, and design. Interactions 1999, 6, 38–43. [Google Scholar] [CrossRef]
  26. Ronzhyn, A.; Cardenal, A.S.; Rubio, A.B. Defining affordances in social media research: A literature review. New Media Soc. 2023, 25, 3165–3188. [Google Scholar] [CrossRef]
  27. Kaplan, A.M.; Haenlein, M. Users of the world, unite! The challenges and opportunities of Social Media. Bus. Horiz. 2010, 53, 59–68. [Google Scholar] [CrossRef]
  28. Zarrella, D. The Social Media Marketing Book; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2009. [Google Scholar]
  29. Bechmann, A.; Lomborg, S. Mapping actor roles in social media: Different perspectives on value creation in theories of user participation. New Media Soc. 2013, 15, 765–781. [Google Scholar] [CrossRef]
  30. Carr, C.T.; Hayes, R.A. Social media: Defining, developing, and divining. Atl. J. Commun. 2015, 23, 46–65. [Google Scholar] [CrossRef]
  31. Hansen, D.; Shneiderman, B.; Smith, M.A. Analyzing Social Media Networks with NodeXL: Insights from a Connected World, 2nd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2019. [Google Scholar]
  32. Hogan, B.; Quan-Haase, A. Persistence and Change in Social Media. Bull. Sci. Technol. Soc. 2010, 30, 309–315. [Google Scholar] [CrossRef]
  33. Howard, P.N.; Parks, M.R. Social Media and Political Change: Capacity, Constraint, and Consequence. J. Commun. 2012, 62, 359–362. [Google Scholar] [CrossRef]
  34. Kietzmann, J.H.; Hermkens, K.; McCarthy, I.P.; Silvestre, B.S. Social media? Get serious! Understanding the functional building blocks of social media. Bus. Horiz. 2011, 54, 241–251. [Google Scholar] [CrossRef]
  35. Lewis, B.K. Social media and strategic communication: Attitudes and perceptions among college students. Public Relat. J. 2010, 4, 1–23. [Google Scholar]
  36. Zhang, L.; Wang, S.; Liu, B. Deep Learning for Sentiment Analysis: A Survey. arXiv 2018. [Google Scholar] [CrossRef]
  37. Dashtipour, K.; Poria, S.; Hussain, A.; Cambria, E.; Hawalah, A.Y.A.; Gelbukh, A.; Zhou, Q. Multilingual Sentiment Analysis: State of the Art and Independent Comparison of Techniques. Cogn. Comput. 2016, 8, 757–771. [Google Scholar] [CrossRef]
  38. Agüero-Torales, M.M.; Salas, J.I.A.; López-Herrera, A.G. Deep learning and multilingual sentiment analysis on social media data: An overview. Appl. Soft Comput. 2021, 107, 107373. [Google Scholar] [CrossRef]
  39. Zhang, W.; Li, X.; Deng, Y.; Bing, L.; Lam, W. A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges. IEEE Trans. Knowl. Data Eng. 2022, 35, 11019–11038. [Google Scholar] [CrossRef]
  40. Jang, H.; Rempel, E.; Roth, D.; Carenini, G.; Janjua, N.Z. Tracking COVID-19 Discourse on Twitter in North America: Infodemiology Study Using Topic Modeling and Aspect-Based Sentiment Analysis. J. Med Internet Res. 2021, 23, e25431. [Google Scholar] [CrossRef]
  41. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A review of affective computing: From unimodal analysis to multimodal fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef]
  42. Soleymani, M.; Garcia, D.; Jou, B.; Schuller, B.; Chang, S.-F.; Pantic, M. A survey of multimodal sentiment analysis. Image Vis. Comput. 2017, 65, 3–14. [Google Scholar] [CrossRef]
  43. Rosas, V.P.; Mihalcea, R.; Morency, L.-P. Multimodal Sentiment Analysis of Spanish Online Videos. IEEE Intell. Syst. 2013, 28, 38–45. [Google Scholar] [CrossRef]
  44. Yin, S.; Fu, C.; Zhao, S.; Li, K.; Sun, X.; Xu, T.; Chen, E. A Survey on Multimodal Large Language Models. arXiv 2024, arXiv:2306.13549. [Google Scholar]
  45. Liu, W.; Zhou, P.; Zhao, Z.; Wang, Z.; Ju, Q.; Deng, H.; Wang, P. K-BERT: Enabling Language Representation with Knowledge Graph. Proc. AAAI Conf. Artif. Intell. 2019, 34, 2901–2908. [Google Scholar] [CrossRef]
  46. Yang, L.; Chen, H.; Li, Z.; Ding, X.; Wu, X. Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling. IEEE Trans. Knowl. Data Eng. 2024, 99, 3091–3110. [Google Scholar] [CrossRef]
  47. Li, X.; Chan, S.; Zhu, X.; Pei, Y.; Ma, Z.; Liu, X.; Shah, S. Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks. arXiv 2023, arXiv:2305.05862. [Google Scholar]
  48. Deng, X.; Bashlovkina, V.; Han, F.; Baumgartner, S.; Bendersky, M. LLMs to the Moon? Reddit Market Sentiment Analysis with Large Language Models. In Proceedings of the Companion Proceedings of the ACM Web Conference, Austin, TX, USA, 30 April–4 May 2023; pp. 1014–1019. [Google Scholar]
  49. Wang, Z.; Xie, Q.; Feng, Y.; Ding, Z.; Yang, Z.; Xia, R. Is ChatGPT a good sentiment analyzer? A preliminary study. arXiv 2024, arXiv:2304.04339. [Google Scholar]
  50. Zhong, Q.; Ding, L.; Liu, J.; Du, B.; Tao, D. Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT. arXiv 2023, arXiv:2302.10198. [Google Scholar]
  51. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q.; Zhou, D. Chain of Thought Prompting Elicits Reasoning in Large Language Models. arXiv 2022, arXiv:2201.11903. [Google Scholar]
  52. Xing, F. Designing Heterogeneous LLM Agents for Financial Sentiment Analysis. arXiv 2024, arXiv:2401.05799. [Google Scholar] [CrossRef]
  53. Zhang, W.; Deng, Y.; Liu, B.; Pan, S.; Bing, L. Sentiment Analysis in the Era of Large Language Models: A Reality Check. arXiv 2023, arXiv:2305.15005. [Google Scholar]
