Article

Out-of-Place Content: How Repetitive, Offensive, and Opinion-Challenging Social Media Posts Shape Users’ Unfriending Strategies in Spain

1 Department of Communication, Carlos III University, Getafe, 28903 Madrid, Spain
2 Faculty of Communication, University of Castilla-La Mancha, 16071 Cuenca, Spain
3 Centre for Social Sciences, Hungarian Academy of Sciences Centre of Excellence, 1097 Budapest, Hungary
4 Centre for Social Sciences, Eötvös Loránd University, 1053 Budapest, Hungary
5 Democracy Research Unit, Department of Political Science, University of Salamanca, 37008 Salamanca, Spain
* Author to whom correspondence should be addressed.
Soc. Sci. 2021, 10(12), 460; https://doi.org/10.3390/socsci10120460
Submission received: 9 September 2021 / Revised: 18 November 2021 / Accepted: 23 November 2021 / Published: 30 November 2021
(This article belongs to the Section Contemporary Politics and Society)

Abstract

Filtering strategies enable social media users to remove undesired content from their feeds, potentially creating homophilic environments. Although previous studies have addressed the individual-level factors and content features that influence these decisions, few have focused solely on users’ perceptions. Accordingly, this study applies social exchange theory to understand how users socially construct the process of unfriending. Based on 30 in-depth interviews with young Spaniards, we identify a widespread pattern of rejection of repetitive, opinion-challenging, and offensive posts, which we conceptualize as out-of-place content: a type of social media stimulus that hinders substantive online exchanges and challenges users’ understanding of social reality and individual values. This study contributes to the current literature on unfriending by suggesting that filtering strategies are implemented gradually, once posts overwhelm users’ tolerance threshold. Our findings also suggest that their deployment hinges on the closeness of the relationship between peers and on the social commitments formed on specific platforms. Future research is needed to assess to what extent the patterns identified in our interviews are present in the overall population.

1. Introduction

Social media platforms enable users to access a wide range of content and opinions published by people from all walks of life. A substantial body of literature agrees that their architecture is designed to build an environment based on users’ personal preferences that pleases them and that limits their exposure to opposing views (Skoric et al. 2018; Van Dijck 2013). Yet, evidence shows that users frequently encounter posts that they do not find interesting, disagree with, or strongly reject (Barnidge 2017; Beam et al. 2018; Cardenal et al. 2019). These posts may lead them to implement filtering strategies, by which they will homogenize their public sphere (John and Gal 2018) and prevent further exposure to similar unpalatable content (Zhu et al. 2017).
In recent years, a line of inquiry into users’ filtering mechanisms on social media has flourished. These tactics have received increasing scholarly attention, given that they may thwart exposure to opposing political views, which is an essential aspect of well-functioning democracies (Kim and Chen 2015). Although extant research has provided important theoretical contributions on the content features that spark users’ rejection (e.g., Neubaum et al. 2021a; Skoric et al. 2018) and the role that different social connections may play in social media curation (e.g., Sibona 2014), scant attention has been paid to users’ perceptions of the online filtering process. Drawing upon social exchange theory (Homans 1958, 1961), this study aims to fill this gap in the literature by examining how users manage online exchanges of undesired content, what this content comprises, and how they subsequently react to it.
Our findings, based on in-depth interviews with 30 Spanish social media users, reveal a widespread pattern of rejection of repetitive, opinion-challenging, and offensive posts. We conceptualize these as out-of-place content: information stimuli that cause users to lose control over their feeds and hinder substantive online exchanges. Our findings contribute to the stream of literature on social media filtering by highlighting the value of social exchange theory for understanding the gradual process of unfriending, which, as our findings illustrate, hinges on the closeness of users’ relationships and the social commitments that stem from specific platforms.

2. Social Exchange Theory: A Route to Understanding Online Relationships

Social exchange theory (SET) addresses social behavior as a transaction of both tangible and intangible goods between at least two individuals (Homans 1958, 1961). It stresses that individuals shape their behavior in response to a cost/reward balance, in which cost refers to the resources required to maintain a relationship and reward to the benefit obtained from it. Accordingly, reinforcement and reciprocity permeate the success or failure of social relationships.
Early work on SET enables an outline of three categorizations of reciprocity within an exchange (Cropanzano and Mitchell 2005). First, there is reciprocity as a transactional pattern of interdependent exchanges, which refers to the idea that one party’s behavior is contingent on that of the other (Blau 1964; Homans 1961; Molm 2000, 2003) and that parties minimize risks and encourage cooperation (Molm 1994). Second, there is reciprocity as a folk belief, which is linked to the notion that “everyone gets what they deserve” (Gouldner 1960; Malinowski 1932), the principle of universal justice or the “just world” (Lerner 1980), and the concept of karma (Bies and Tripp 1996). Finally, there is reciprocity as a moral norm, which punishes those who do not comply (Malinowski 1932; Mauss 1967). Researchers have also argued that reciprocity is directly proportional: the negative or positive nature of someone’s actions is reflected in the responses that person receives (Eisenberger et al. 2004).
Another founding author of SET, Peter Blau (1964), introduced the intentionality and voluntariness of the interaction as another key point and proposed different types of interpersonal relationships, emphasizing power as a result of unilateral dependencies. Similarly, Emerson (1962, 1976) conceived of power as an inherent factor of social exchange and concurred with other scholars that the position each individual occupies in a relationship determines their use of power and, therefore, their reward (Markovsky et al. 1988; Skvoretz and Willer 1993). From these power structures also emerge commitments and prospective social obligations for the parties that determine the outcomes of exchanges (Emerson 1976).
Social media platforms provide a prime example of continuous exchanges between individuals from all contexts. They play a central role in everyday public discourse and have turned online engagement into a “global phenomenon” (Wanga and Liu 2019). On these grounds, SET may lend itself well to the understanding of social relationships and individuals’ behavior in the digital realm. Despite previous studies suggesting that these online spaces have the potential to change existing forms of social relationships (McFarland and Ployhart 2015), individuals mostly maintain the traditional structure of trust circles when they are online and mainly interact with others with high-level subjective knowledge (Xiao et al. 2012). Social media allows for anonymity (Anduiza et al. 2009), a plausible pathway to diminish the costly privacy risks that users fear in online exchanges (Liu et al. 2016). Sharing information and engaging on social media may also potentially fulfill the main rewards that users expect from these platforms: social acceptance and recognition (Wasko and Faraj 2005; Yan et al. 2016).
Moreover, online ‘friendships’ redefine the meaning of cost. These relationships require less commitment and reciprocity and are, therefore, less costly and more likely to become long-lasting (McFarland and Ployhart 2015; Surma 2016). Bradley et al. (2019) applied SET to understand how users behave with others and perceive their likability, in addition to how this affects online connections. The results showed that the excessive self-promotion of users would annoy their networks, implying less reciprocity and group benefit. In order to bypass similar negative exchanges, users tend to personalize their digital connections following reinforcement-seeking motivations that spur them to choose the most rewarding alternatives (Knobloch-Westerwick 2014). In this regard, scholars have long noted the tendency of individuals to associate with others who are similar to them (McPherson et al. 2001), a phenomenon known as homophily that social media may replicate, and which leads to the creation of echo chambers (Garrett 2009; Sunstein 2018) and filter bubbles (Pariser 2011).
These two phenomena overlap continuously and imply that individuals create online spaces to guarantee their exclusive exposure to confirming opinions (Garimella et al. 2018). At the same time, social media algorithms monitor users’ activity, capturing their true interests and needs and providing an environment in which they feel comfortable (Van Dijck 2013). According to Geschke et al. (2019), these features converge in a “triple-filter bubble” that prevents the exposure of users to cross-cutting content.
However, extant research has provided rather inconclusive findings, suggesting that social media platforms are actually rather heterogeneous and often expose their users to content that they do not agree with or are not interested in (Barnidge 2017; Beam et al. 2018; Cardenal et al. 2019). Several rationales may explain this relative heterogeneity. First, although people may actively create connections based on their potential rewards, underutilization of this affordance may occur given the ease of making online connections. This may lead to the proliferation of weak ties (Kim and Chen 2015), which form the primary source of heterogeneity (Granovetter 1973). Furthermore, algorithms seem to prioritize personal connections and may show ideologically diverse content if it is published by someone with whom a user frequently interacts (DeVito 2017). Another possible reason could be the lack of selective avoidance, which suggests that while people deliberately seek like-minded and reinforcing information, they do not avoid dissimilar views when they inadvertently stumble upon them (Garrett 2009; Goyanes et al. 2021). Additionally, studies have shown that family and personal friends might influence a user to click on the counter-attitudinal content they recommend (Anspach 2017).
All things considered, even though users generally prefer a homogenous climate of information and platforms make efforts to adjust their environments to individual preferences, usage patterns still result in a rather heterogeneous context, which enables users to partake in exchanges with a wide variety of perspectives, and learn from them, even incidentally (Barberá 2014; Eady et al. 2019; Gil de Zúñiga et al. 2021). However, this diverse information landscape poses additional challenges for individuals. As mentioned, it makes them confront content that they oppose, and it also exposes them to other less civil forms of individual expression, such as hate speech or uncivil political discussions (Alkiviadou 2019; Vargo and Hopp 2017) that heighten the hostility of users (Rösner et al. 2016).

3. Social Media Filtering: Users’ Responses to Undesired Content

Scores of studies have suggested that social media filtering is a plausible pathway that users activate when they are exposed to content that they somehow dislike (e.g., Goyanes and Skoric 2021; John and Agbarya 2020; Sibona and Walczak 2011). Social media allows posts of all kinds, but it also enables users to freely conduct curating strategies in their feeds, such as unfriending (Sibona 2014). Through these tactics, users engage in selective avoidance and break the social link with the source of the undesired information, thereby preventing further exposure to similar content (Skoric et al. 2018; Zhu et al. 2017). These mechanisms have the potential to perpetuate the creation of homophilic environments (Skoric et al. 2018) and have thus been a focus of research over recent years. In general, scholars have shown that unfriending goes hand in hand with political matters (Goyanes and Skoric 2021; John and Gal 2016; Zhu et al. 2017) and is more likely to emerge when users perceive disagreement within their online network or come across a message with which they take issue (Bode 2016; Rainie and Smith 2012). Still, some studies have demonstrated that unfriending is also positively associated with incivility and even with unimportant and overly frequent posts (Goyanes et al. 2021; John and Agbarya 2020; Sibona and Walczak 2011).
In addition, people have different motivations for using social media and different expectations, against which they appraise the content they encounter. Users with strong self-presenting and socializing desires are more willing to share and consume political content than those whose main use revolves around entertainment (Choi 2016). At the same time, the latter group is more likely to be exposed to political content incidentally (Heiss and Matthes 2019), although those who are not interested in politics skip political posts more often (Bode et al. 2017). Accordingly, filtering strategies are typically employed by individuals who regularly use social media, have a strong ideology, and are interested in political discussions and engage in them (Bode 2016; John and Dvir-Gvirsman 2015; Skoric et al. 2018; Yang et al. 2017; Zhu et al. 2017).
Furthermore, users not only ponder the content itself, but also those who have posted it. Evidence has shown how people are more likely to click on posts if they are mediated by strong ties (Anspach 2017; Kaiser et al. 2021) and typically consider content to be more credible if the sharer is perceived as ideologically like-minded (Lee et al. 2018). Similarly, online users appear to be more tolerant towards defiant content when they share a strong bond with the person who posted it (Valenzuela et al. 2018), whereas curation mechanisms may have consequences in face-to-face relationships (Yang et al. 2017). The opposite occurs with acquaintances and distant friends, who are the most common targets of filtering strategies (John and Dvir-Gvirsman 2015; Sibona 2014). Other recent studies have highlighted how a user’s tolerance threshold seems to increase with relationally close users who provide them with emotional support (Neubaum et al. 2021b). Neubaum et al. (2021a) emphasized that unfriending hinges both on the relative closeness individuals have and the perceived severity of the online disagreement. Ultimately, as John and Gal (2018) pointed out, each social media user has a “personal public sphere” on their networks where they behave according to both standard rules of the public sphere and the rules they create themselves.

4. Problem Statement and Research Questions

Extant research on unfriending has addressed the individual-level factors (Bode 2016; Skoric et al. 2018) and content features (Yang et al. 2017; Zhu et al. 2017) that potentially influence users’ filtering tactics on social media. These studies, which largely relied on quantitative data, have typically sought to empirically test why and under what conditions social media users unfriend (Bode 2016; Goyanes et al. 2021). Despite the laudable efforts to account thoroughly and quantitatively for the behavioral and cognitive antecedents of this phenomenon (Skoric et al. 2018), limited evidence has thus far inductively examined how users socially construct the process of social media unfriending and how this avoidance behavior is modeled by users’ social media affordances and ecologies. It has been argued that even though social media platforms potentially afford users a plethora of heterogeneous linkages and politically inconsistent exposure (Barberá 2014; Eady et al. 2019), users may also implement curation tactics to avoid opinion-challenging, offensive, and repetitive content by adjusting their feed to their cognitive volition. Thus, this study aims to further understand users’ tolerance threshold for undesired content and the behavioral reactions that they may activate via unfriending. To do so, the following research questions are posed:
RQ1:
What content features encountered on social media trigger users’ rejection?
RQ2:
How and under what circumstances do users address their rejection through unfriending?

5. Materials and Methods

To answer the research questions, 30 in-depth interviews were conducted, offering interviewees the freedom to articulate complex thoughts and perceptions (Kallio et al. 2016). Young Spanish people were recruited as respondents for two main reasons: (1) the growing salience of social media in young people’s lives (Interactive Advertising Bureau Spain 2021) and (2) the higher tendency of this group to unfriend and hide contacts in comparison to older users, especially when they self-segregate within politically homogeneous groups (Yoo et al. 2018).
For the recruitment of the sample, we used a snowball sampling technique. First, we contacted potential participants individually to interview them and assess whether they were interested in participating in the study. At the end of the interview, each interviewee was asked to provide the names of two to four acquaintances who might be able to contribute to the study (see Boczkowski et al. 2018). On a random basis, some of the named acquaintances were approached, and others were placed on a waiting list. This procedure was repeated with each person who was subsequently interviewed. Moreover, in order to be eligible for our study, potential participants needed to regularly use and check social media platforms like Instagram or Twitter. Those who did not meet this criterion were excluded from the final sample. Even though common standpoints and consensus clearly arose at an early stage of the interviewing process, we deliberately conducted a large number of interviews to obtain saturation of ideas (Goodman 1961). The age of our participants ranged from 18 to 30 years, although most were aged between 18 and 24 (n = 28). Table 1 provides a brief description of the sample characteristics.
While snowball sampling techniques cannot yield a widely diverse composition of respondents, our sample included people from different cities and regions who had varying educational levels and ideologies and formed different age cohorts with distinct experiences and viewpoints. Nonetheless, students are largely overrepresented among respondents. Our participants, however, reflected great heterogeneity, both in their use of social media and in their attitudes towards the content they interacted with.
All the interviews, which guaranteed the respondents’ confidentiality, took place between March and April 2020, during the COVID-19 outbreak. Consequently, due to lockdown and mobility restrictions in Spain, the vast majority of the interviews were conducted via video call (n = 28), while the rest were face to face (n = 2). In general, the interviews lasted between 15 and 30 min, and they were digitally recorded and transcribed verbatim. A general set of questions on demographic characteristics was asked prior to each interview. These questions canvassed the respondent’s age, education, and personal background.
The interview guide comprised three sections, namely, ‘general social media use’, ‘disturbing content’, and ‘unfriending strategies’. The first of these concerned the participants’ preferences and general social media use, including the differences they encountered between social media platforms. The second section concerned the participants’ exposure to dissonant or counter-attitudinal content on social media. The questions addressed the nature and characteristics of such stimuli, the main topics that were addressed, and the participants’ subsequent emotional reactions. The third section focused on the dynamics of user filtration and unfriending on social media. Specifically, we examined the participants’ tolerance threshold, their attitudes towards the opinion-challenging content they encountered, and the interplay of motivations, content features, interpersonal relationships, and platform expectations that affected their attitudes and responses to this content. The interviewees were specifically asked to answer with as many examples as possible.
For the thematic analysis, we followed the six-phase analytical procedure proposed by Braun and Clarke (2006) to assure systematization and transparency in the data. Accordingly, we first transcribed the interviews and read them multiple times in order to familiarize ourselves with the data and generate the initial codes. Then, we identified similar patterns across participants’ testimonies and collated these codes into potential themes. These themes were reviewed, defined, and named. Finally, we selected the compelling examples, and carried out a final analysis.

6. Results

6.1. General Social Media Use and Personal Motivations

Extant research has suggested the integral role of social media in people’s lives and, in fact, a large majority of our participants acknowledged spending more than three hours per day on these sites. Additionally, our findings emphasize the growing relevance of these platforms in establishing personal relationships and invigorating in-group belongingness: “Everyone my age uses them” (P1). This pervasive social media use is a response to a media ecology characterized by the omnipresence of news, in which our respondents needed “to keep up with what’s going on” (P12) in order to effectively “communicate” (P27) with others. These uses, and the gratifications they bring, turn social media into a digital ecosystem specifically designed for passing time and socializing. Our interviewees mainly used Instagram, Twitter, and Facebook for these goals. However, not all social media are uniformly relevant for young adults. According to our findings, Facebook was typically considered to be a platform “for older people,” while Instagram and Twitter were “the most interesting and fun” (P20) and usually associated with younger users.
Our evidence also suggested that users increasingly access social media platforms to be informed about public affairs and politics. For instance, Participant 10 considered Twitter to be a personalized newspaper and acknowledged using it typically to check what had been happening in politics. Moreover, as a by-product of short messages and the constant stream of pictures, our respondents believed that they were receiving a constant flow of information that shaped their daily news diet. However, as stated by Participant 24 and echoed by other testimonies: “I try not to get my information through social media, but people post about everything, and the information always ends up finding me.” Although this suggests that users may be superficially informed about current events and public affairs through these ecologies, many of them do not exclusively trust these online sites for their news consumption, as Participant 18 noted: “Once I see that something has happened on Twitter, I look for more information on other sites.”
Overall, most of our respondents considered that their principal use of social media was for entertainment purposes. Although Instagram was the favorite platform for feeding such gratification, some respondents held a dissenting perspective and considered Twitter instead. According to our evidence, users of both platforms would consume content of every kind, including memes, funny posts, recipes, and tutorials. However, it was evident that users had different motivations for using each platform:
“On Instagram, as it is more visual, I see photos, but I don’t upload a lot of things. Twitter is where I go if I want to have a laugh. I also use [Instagram and Twitter] for information, but I use them for different things.”
(P7)

6.2. Opinion-Challenging, Offensive, and Repetitive Content on Social Media

The plethora of posts users are exposed to on social media allows them to find “new things” and interesting content to consume. However, given this potentially diverse exposure, users may also stumble upon posts that they dislike and reject. These posts, which we conceptualize as out-of-place, typically challenged most users’ understanding of social reality and individual values (including ideology). As Participant 11 put it: “I’m on social media to entertain myself and have a good time. I don’t want to feel discomfort.” This rejection hinges on the fact that, for most respondents, social media was typically considered a domain for entertainment, or as Participant 4 put it: “For having a good time.” Furthermore, since these platforms allow users to follow and choose who they are friends with, the majority of participants expected to find posts that pleased them. In other words, out-of-place content causes users to lose control of their feeds, which eventually makes them feel uncomfortable and triggers rejection, as Participant 6 put it: “All posts have similar themes. People share their opinions when they shouldn’t, and I don’t use social media for that. I use it to have fun, not to influence other people.”
Throughout our evidence, three types of out-of-place content were typically portrayed: opinion-challenging, offensive, and repetitive. Opinion-challenging content comprises posts that represent views that conflict with the recipient’s pre-existing opinions. Users feel a post to be offensive when its tone is uncivil or it uses defamatory or exclusionary language. Repetitive posts are content that appears on users’ news feeds too frequently. These stimuli stem from the same root cause: they are not what our respondents expect to encounter. However, these types of content are also distinctive and nurture different emotional and behavioral responses hinging on users’ individual traits and motivations. Similarly, social media creates a convergence of what appear to be two groups with opposing views: the in-group (the “selfs”), who have specific (mainly political) criteria by which they appraise the posts they encounter, and the out-group (the “others”), whose (political) views challenge the former and spark disagreement. These political discussions, which some interviewees directly linked with hate speech, significantly energized discourses of alterity and seemed to be the most common trigger of opinion-challenging and offensive stimuli. As one put it:
“There are many people who promote a lot of hate. Most of it is about political issues. As soon as a person publishes something that another person doesn’t like, the hate and the fighting begin.”
(P11)
However, according to our evidence, opinion-challenging and offensive stimuli were not only related to voting decisions or party politics, but also to other politics-related issues, including feminism, fascism, and racism. In this sense, our respondents’ perspectives regarding offensive posts seemed to be contingent not only upon the topic of the post itself, but also upon the perspectives held by others. For instance, Participant 5, when addressing dissonant views, clarified: “It is not so much the subject that is chosen that bothers me per se, but the approach to tackle it.” He/she also considered that many people share opinions “mostly about politics but have no idea and think poorly.” In the same vein, repetitive content is also related to political and (random) personal thoughts, including photos. For instance, Participant 1 considered that he/she would frequently feel overwhelmed when “users upload a lot of posts.” Similarly, Participant 19 complained: “I don’t know why people have to post on social media what they think all the time.”
In addition, respondents considered each social media platform as having its own affordances, which permeate the way they cognitively appraise the content they encounter. Thus, for instance, Instagram was mostly associated with repetitive posts, and Twitter with offensive and opinion-challenging content. As illustrated by one of our participants: “On Twitter, what typically annoys me is football-related comments, like comments criticizing my team, and offensive tweets. On Instagram it’s not usually offensive; it’s more about how repetitive, and therefore tiring, the content is” (P15).
In the words of Participant 6, social media posts simply mirror society: “People are a little ignorant.” Beyond these thematic patterns, when it comes to making sense of opinion-challenging and offensive content, the testimonies pointed to a fairly common approach: most posts typically addressed issues and causes with which the users strongly identified. As many of our interviewees believed, many people use social media to openly expose what they stand for, to discuss politics, and even to persuade others. Nevertheless, as noted by Participant 12, “Many people don’t think about what they write and radicalize their opinion so that it reaches more users.” However, he/she also pointed out that “in politics, everyone has their own ideas, and they publish what they think is right.” In some cases, posts would critically cross a line and trigger rejection and disagreement. As Participant 9 acknowledged: “There are stupid days, and there are people who are stupid all day. Still, in Spain, there is freedom of expression, and therefore you can post whatever you want on your profile.”
Interviewees provided myriad insightful examples of opinion-challenging and offensive content that they considered to have “crossed the line.” Certainly, social media platforms are domains where users post about all kinds of topics that are not uniformly pleasing for everyone. For instance, an element that notably offended interviewees was the massive circulation of fake news. Many believed that it has become a standard and widespread practice for users to lie and amplify fabricated information (P4). As explained by Participant 17: “This behavior is increasingly common,” specifically on Twitter, “where a lot of people distribute fake news.” This causes hoaxes to spread and quickly trigger hate speech. As Participant 15 stated: “There are many hoaxes and a lot of stupid things that people share without thinking, and that’s how they spread. It makes me angry.”
Most of our female respondents identified offensive content that would attack feminism and women’s rights. Participant 3, for instance, explained:
“Posts that I don’t like are, for example, ones on the subject of abortion. I understand that people have different opinions, and some don’t agree with this idea, but women are free to decide whether they have a baby or not.”
Germane to this, some respondents also acknowledged a positive side of encountering, and sometimes interacting with, ideologically divergent posts. As Participant 17 pointed out: “I like the ideological richness of social media and following people with different ideas than mine. On the one hand, it makes me angry, but on the other, I also value it.” Echoing this perspective, Participant 2 acknowledged that “I want to see what these people [with differing views] think. But when I see these posts… I look through the discussions in the comments, but I never participate.”

6.3. Social Media Discussion, Tolerance and Unfriending

How to react cognitively and behaviorally to out-of-place content was crucial in our respondents’ approach to social media curation. On these platforms, posts can quickly evolve into convoluted arguments involving several users. To avoid such confrontations, our respondents would typically refrain from offering direct reactions. As noted by Participant 2, “I keep my opinions to myself, so as not to argue when I think differently. When I see these posts, I feel indifferent.” Even though several interviewees indicated that they preferred not to engage or interact with out-of-place posts, there was no consensus in their reactions. As a rule of thumb, the most common reactions were anger and indifference:
“Reading these tweets makes me feel angry, and I become a little upset because I have such a different view of the world. In my Twitter I want to see things that I like, not those that I don’t like. That’s why I prefer not to follow those people.”
(P13)
Likewise, Participant 16 indicated that he/she felt sorry for the way society is developing and went on to explain that “I know that not everyone can think like me, although I am mainly indifferent to such posts.” Most of our participants shared this feeling of apathy and only one acknowledged directly engaging in strong arguments on social media. Does this mean that our respondents’ tolerance for out-of-place content is high? When asked about this, most considered themselves to be tolerant, but only “to a certain extent” (P22). Participant 4 provided an illustrative example:
“I consider myself tolerant, I am a very open person, I always listen to people. When I read comments that might bother me, I wonder how a person could think that way, but it doesn’t particularly bother me.”
However, throughout our testimonies, there seemed to be a specific limit, or threshold of tolerance, for anything concerning morality, human rights, or the dignity of people and animals. Accordingly, most participants stated that they could not stand “fascist, sexist, or racist comments” (P7) and argued that such posts would be “intolerant already” (P29). The paradox of tolerance was therefore documented by our interviewees: everything has a threshold, and that includes tolerance. Consequently, people whom the respondents considered to be intolerant would not typically be tolerated. Another participant clarified this conundrum in the following terms:
“I consider myself to be tolerant as long as I’m not faced with opinions that attack a person’s dignity. In politics, I am very respectful. Everyone has the right to have their own opinion, as long as it is dignified. You can’t cross that line.”
(P12)
Other participants argued that their tolerance would hinge on the subject being addressed. Participant 28 pointed out that he/she was not tolerant when people would “utter atrocities or statements against some groups of people.” Even though most participants considered themselves to be tolerant when coping with those “others” who thought differently, some respondents felt hesitant when reflecting upon their level of tolerance on social media, as the following quotes illustrate:
“It bothers me a lot when people think differently from me or have thoughts that I think are wrong. Maybe that’s not being tolerant; I don’t know.”
(P7)
“I consider myself to be tolerant; I want to believe that I am. I always listen to other people’s opinions and don’t try to change them. Still, it makes me angry that people are not equally tolerant.”
(P10)
Few interviewees acknowledged a lower tolerance threshold, such as Participant 14. He/she acknowledged that he/she “could be much more tolerant” and would like to be, but despite these efforts, he/she would typically get angry when users published content that did not “have a moral basis.” When posts reach such levels of immorality and radicalism, most interviewees would respond by deploying social media filtering tactics. Although one of our respondents considered there to be no need to mute or ignore friends and/or followers for their opinions or behavior, this appeared to be the common outcome. This usual filtering practice is mainly linked to the affordances of the platforms: when a user mutes someone in their feed, the action goes unnoticed by the offender, and the dyadic relationship is not publicly broken. As Participant 1 concluded: “It is the most comfortable reaction,” as nobody gets hurt.
Another common and more drastic response to out-of-place content is unfollowing/unfriending. Although most of our participants generally accepted freedom of speech on social media and were aware of the challenging opinions they would most likely encounter in this media environment, they largely embraced unfriending as a means to reinforce their autonomy. Such decisions would be taken through a simple equation, which Participant 6 elegantly described as follows: “People are entitled to provide their opinion, and I am entitled to unfollow them.” According to our findings, this reaction was mostly triggered when respondents faced offensive content that insulted other individuals or groups, and it was mentioned by many participants.
As for the rest of the reactions, the most extreme ones (blocking and reporting) were highly uncommon and also mostly associated with offensive posts. In addition, users would deploy these measures gradually. Participant 18 stated that “if the person who publishes a post is going too far, I unfollow him. And if it has been an exaggerated attack, I will report the account.” As mentioned, very few users directly engage in political discussions. However, many participants acknowledged having reported, muted, or unfollowed people because of their unrestrained political opinions.
As seen throughout our testimonies, users would typically begin discussions only when they personally knew and appreciated the other person. An additional dimension comes into play here: real-life friendships. There was widespread agreement among our respondents that the level of anger and reaction to out-of-place content is directly proportional to the degree of friendship with that user in real life. “I unfollow acquaintances, but never friends,” said Participant 4. As explained by Participant 15: “I have never unfollowed someone I had a strong friendship with. You don’t have to put an opinion before a relationship. Friendship is worth more; social media are secondary.” In the same vein, Participant 17 pointed out:
“I have never stopped following a friend because of something they have posted. I have friends who think differently to me, but I value their friendship more. I have a bond with that person, I like them; we have a relationship and debate sometimes. Friendship is more important.”
For our respondents, friendship would usually come first, and dialogue was a potential way of resolving out-of-place posts, especially opinion-challenging or offensive ones. Participant 28 explained that “if a friend publishes something I don’t like, I will tell them. I will discuss it with them because I have friends of all ideologies; I try not to live in a bubble of my own ideology.” This reaction, according to our respondents, is akin to real life. Participant 8 even acknowledged that, if a political discussion involved a close friend, “I send them alternative content to keep them better informed.” However, such situations appear to be less frequent with friends since they are often ideologically like-minded, as the following participant reflected: “I have never had a friend post offensive content. The people I hang out with usually have opinions similar to mine, and if they think differently, at least they don’t say it” (P23).
In addition, friendships seemed to be complemented by a social commitment that would extend beyond the boundaries of social media and thwart any effort to unfollow friends because “they are actual friends” (P13) and “I don’t want them to realize that I’ve unfollowed them” (P29). This participant went even further, stating that “you have to respect your friends, even if they are a little bit stupid.”
Finally, our interviewees emphasized the contingent factors that determine users’ reactions on different social networks. The general pattern was to relate Instagram to the abovementioned idea of social commitment: “On Instagram, there is a commitment; there are people I follow because I have to follow them,” stated Participant 3. Participant 13 even went as far as to say it was “ugly” to unfollow someone on that social media platform. Not only that, one participant had even created a new account in order not to unfollow friends on Instagram. On Twitter, in contrast, the typical approach was simply to unfollow because “the commitment that exists on Instagram does not exist here” (P22).

7. Discussion and Conclusions

The digital space of social media enables users to engage in myriad online exchanges. Prior research has demonstrated that even in these highly personalized environments, individuals frequently stumble upon posts that they object to (Barnidge 2017; Beam et al. 2018; Cardenal et al. 2019), and that may lead them to implement filtering mechanisms. In our interview-based research, we investigated the types of content that users reject and the circumstances in which they deploy filtering strategies as a response to it. Drawing upon social exchange theory, our study provides a better understanding of the rationales that shape users’ management of online connections, undesired content, and filtering strategies. Specifically, based on in-depth interviews with 30 Spanish social media users, our findings identify what content features disturb users and how they implement the mentioned mechanisms on these platforms.
First, our findings highlight how online relationships—just like their offline counterparts (Homans 1958, 1961)—are mediated by reciprocity and reinforcement. Users navigate social media and engage in online interactions while expecting all parties to stimulate cooperation and meet the required levels of reciprocity (Cropanzano and Mitchell 2005). This “folk belief” was evident throughout our respondents’ accounts, although they would frequently engage in negative interactions that failed to meet the expected benchmark of reciprocity. According to our evidence, such exchanges entail repetitive, offensive, and opinion-challenging posts and are often perceived as too costly. We have encompassed these unwelcome posts in what we conceptualize as out-of-place content and explained its rationales and implications. Specifically, the concept of out-of-place refers to posts that social media users reject and that break the cost/reward balance of online connections by challenging users’ understanding of social reality and individual values.
Throughout our testimonies, we found three dimensions of out-of-place content: (1) opinion-challenging posts that confront users’ ideological positions; (2) offensive political and non-political content against human rights or specific social groups, such as hate speech; and (3) repetitive content linked to an individual’s feeling of saturation in the face of overly frequent posting. As previously highlighted, most users reject online political content, especially when it is perceived to be contrary to their views (Bode 2016; Rainie and Smith 2012). However, our conceptualization of out-of-place content nuances this perspective, stressing that rather than being political per se, such content is associated with polarizing messages that dichotomize discourse in the dialectic of “the selfs” versus “the others.” Thus, undesired online content may also include non-political subjects such as sports. Moreover, according to our evidence, hate speech, extreme and exclusionary language, and fake news are the most salient triggers, rather than cross-cutting exposure. In line with previous research, our participants also tended to oppose what they considered to be low-quality content, such as ignorant or repetitive opinions and posts linked to users’ self-promotion (Bradley et al. 2019; Sibona and Walczak 2011).
Second, our findings cast light on users’ thresholds of tolerance and response to undesired online content through the process of unfriending. When users receive out-of-place content, the online exchange unavoidably falls short, costing more than it rewards, potentially jeopardizing online relationships. Although some interviewees appreciated these types of posts because they enabled them to find out what ‘the others’ think, the majority rejected their exposure and perceived them to be bothersome. Nonetheless, individuals rarely confront and discuss issues with other users, and instead feel entitled to penalize the source of the displeasing post, typically by unfriending or other filtering strategies that are conducted gradually. This reaction, however, does not imply a high degree of tolerance, but rather reinforces the lower involvement of the user in online relationships and the lower risks they face in this environment (Liu et al. 2016).
Moreover, even though users are inclined to reject out-of-place content, these posts do not immediately trigger users to break online connections or to conduct other content-filtering strategies. Our findings illustrate how post hoc user filtration is a gradual process that users implement when posts cross the line. Accordingly, once the negative nature of an online exchange surpasses users’ tolerance threshold, they punish the source with an equivalent reaction by employing a content filtering tactic. For instance, according to our evidence, blocking and reporting—the most drastic measures—are only undertaken following highly offensive messages. In short, as Eisenberger et al. (2004) pointed out, in social exchanges, reciprocity is directly proportional in both negative and positive interactions.
Our third contribution is to provide insights on users’ employment of filtering strategies. As mentioned before, offensive posts trigger users’ rejection the most. Likewise, they are usually the ones that overwhelm users’ tolerance, pushing them to unfriend and end the unbalanced relationship. In this regard, although a good deal of research on unfriending has focused on its links to cross-cutting exposure (e.g., Bode 2016; John and Dvir-Gvirsman 2015; Yang et al. 2017), our evidence suggests that users’ filtering strategies against out-of-place content are more related to offensive posts involving hate speech or fake news. Our findings are thus more aligned with recent research (Goyanes et al. 2021; John and Agbarya 2020; Neubaum et al. 2021a) noting that these tactics do not necessarily respond to a homophilic construction of the feed and instead denote users’ strong rejection of online incivility and hate speech.
Finally, our findings highlight that the employment of the mentioned strategies hinges not only on the characteristics of the content, but also on its source and platform. On the one hand, an existing social relationship with the individual from whom the out-of-place content originates plays an important role in the outcome of the exchange. In this regard, our evidence suggests that the most extreme filtering strategies are not undertaken with close offline friends, but rather with acquaintances or distant contacts, as previous research has indicated (John and Dvir-Gvirsman 2015; Neubaum et al. 2021a). According to our participants, real friendship makes the cost worth paying, implying that offline exchange rules prevail over online ones. However, our evidence suggests that users do not dismiss the online reciprocity guidelines of social exchanges in their entirety and engage in dialogue to resolve the negative exchange and restore the balance of the online relationship.
On the other hand, platforms’ specificities also matter. As well as reflecting users’ commitment to social relationships (Emerson 1976), our findings indicate that there is also a commitment to platforms that varies between sites and ultimately hinders the use of filtering mechanisms regarding out-of-place content. For instance, our interviewees emphasized that the degree of engagement on Instagram was much higher than on other social media platforms such as Twitter. This inherent commitment makes users on Instagram more permissive towards opinion-challenging content that does not jeopardize human rights or touch upon personally sensitive topics, and it prevents them from using filtering strategies. This is not the case with the more impersonal Twitter, where connections are typically much less demanding, and users are thus less tolerant to offensive content.
Overall, our findings suggest that personal relationships and platform differences are crucial to understanding filtration tactics. Users’ interactions with social media platforms are more complex than is usually presumed: they are responsive not only to the content, but also to the social and technological contexts of particular messages, which shape their reactions as well. This complex response may be one of the reasons why most users are not closed into homogenous echo chambers within their social media feeds. Another important lesson that can be drawn from our research is that the underutilization of these tactics cannot be explained only by a lack of knowledge about them. Our respondents are aware of the existence of filtering mechanisms, and their sparing use of them is an informed and conscious decision. All things considered, our results show that respondents weigh the rewards resulting from their social connections more heavily than the costs of being exposed to out-of-place content.
Naturally, although our research advances the existing, mostly quantitative, knowledge on social media filtering strategies, it is not devoid of certain limitations. First, our sample is not very diverse demographically and mostly represents a tech-savvy, social media native layer of the population. While this sample is suitable for detecting multiple, non-knowledge-related causes of the underutilization of filtering mechanisms, we would probably find different patterns if we looked at less educated or older cohorts of the population. Future studies are needed to map the usage patterns of the wider social media user population. The timing of our research represents another limitation. The pandemic and the resulting restrictions may have triggered deleterious emotional effects in our respondents, presumably affecting both the content they were exposed to during this period and their responses to it. It would thus be useful to replicate this study in different time periods to confirm the generalizability of our findings. Moreover, in order to validate our findings on representative samples, several observations from our research could be translated into survey questions, such as the role of social relationships and platform differences in responses to out-of-place content.

Author Contributions

Conceptualization, B.J., A.C., M.G.; methodology, B.J.; validation, M.G.; formal analysis, B.J. and A.C.; data curation, B.J. and A.C.; writing—original draft preparation, B.J., A.C. and M.B.; writing—review and editing, M.G.; project administration, B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Program of I+D+I (RTI2018-096065-B-I00) oriented to the Challenges of Society and the European Regional Development Fund (ERDF) about “Nuevos valores, gobernanza, financiación y servicios audiovisuales públicos para la sociedad de Internet: contrastes europeos y españoles.” Azahara Cañedo receives funding from the European Regional Development Fund (ERDF), call 2020/3771. Márton Bene is a recipient of the Bolyai János Research Fellowship awarded by the Hungarian Academy of Sciences (BO/334_20).

Institutional Review Board Statement

Ethical review and approval were waived for this study since we informed all participants about the nature of the research and intended dissemination of the results before carrying it out. We guaranteed their confidentiality and privacy by anonymizing their names, and they all verbally consented to participate.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alkiviadou, Natalie. 2019. Hate speech on social media networks: Towards a regulatory framework? Information & Communications Technology Law 28: 19–35. [Google Scholar]
  2. Anduiza, Eva, Marta Cantijoch, and Aina Gallego. 2009. Political participation and the Internet: A Field Essay. Information, Communication & Society 12: 860–78. [Google Scholar]
  3. Anspach, Nicolas M. 2017. The New Personal Influence: How Our Facebook Friends Influence the News We Read. Political Communication 34: 590–606. [Google Scholar] [CrossRef]
  4. Barberá, Pablo. 2014. How social media reduces mass political polarization. Evidence from Germany, Spain, and the US. Job Market Paper, New York University, 1–46. [Google Scholar]
  5. Barnidge, Matthew. 2017. Exposure to Political Disagreement in Social Media Versus Face-to-Face and Anonymous Online Settings. Political Communication 34: 302–21. [Google Scholar] [CrossRef]
  6. Beam, Michael A., Jeffrey T. Child, Myiah J. Hutchens, and Jay D. Hmielowski. 2018. Context collapse and privacy management: Diversity in Facebook friends increases online news reading and sharing. New Media & Society 20: 2296–314. [Google Scholar]
  7. Bies, Robert J., and Thomas M. Tripp. 1996. Beyond distrust: Getting even and the need for revenge. In Trust in Organizations. Edited by Roderick M. Kramer and Tom Tyler. Thousand Oaks: Sage, pp. 246–60. [Google Scholar]
  8. Blau, Peter M. 1964. Exchange and Power in Social Life. New York: John Wiley. [Google Scholar]
  9. Boczkowski, Pablo J., Eugenia Mitchelstein, and Mora Matassi. 2018. “News comes across when I’m in a moment of leisure”: Understanding the practices of incidental news consumption on social media. New Media & Society 20: 3523–39. [Google Scholar]
  10. Bode, Leticia. 2016. Pruning the news feed: Unfriending and unfollowing political content on social media. Research & Politics 3: 2053168016661873. [Google Scholar]
  11. Bode, Leticia, Emily K. Vraga, and Sonya Troller-Renfree. 2017. Skipping politics: Measuring avoidance of political content in social media. Research & Politics 4: 2053168017702990. [Google Scholar]
  12. Bradley, Steven W., James A. Roberts, and Preston W. Bradley. 2019. Experimental Evidence of Observed Social Media Status Cues on Perceived Likability. Psychology of Popular Media Culture 8: 41. [Google Scholar] [CrossRef]
  13. Braun, Virginia, and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3: 77–101. [Google Scholar] [CrossRef] [Green Version]
  14. Cardenal, Ana S., Carlos Aguilar-Paredes, Carol Galais, and Mario Pérez-Montoro. 2019. Digital Technologies and Selective Exposure: How Choice and Filter Bubbles Shape News Media Exposure. The International Journal of Press/Politics 24: 465–86. [Google Scholar] [CrossRef]
  15. Choi, Sujeong. 2016. The flipside of ubiquitous connectivity enabled by smartphone-based social networking service: Social presence and privacy concern. Computers in Human Behavior 65: 325–33. [Google Scholar] [CrossRef]
  16. Cropanzano, Russell, and Marie Mitchell. 2005. Social exchange theory: An interdisciplinary review. Journal of Management 31: 874–900. [Google Scholar] [CrossRef] [Green Version]
  17. DeVito, Michael A. 2017. From Editors to Algorithms. Digital Journalism 5: 753–73. [Google Scholar] [CrossRef]
  18. Eady, Gregory, Jonathan Nagler, Andy Guess, Jan Zilinsky, and Joshua A. Tucker. 2019. How many people live in political bubbles on social media? Evidence from linked survey and Twitter data. Sage Open 9: 2158244019832705. [Google Scholar] [CrossRef] [Green Version]
  19. Eisenberger, Robert, Patrick Lynch, Justin Aselage, and Stephanie Rohdieck. 2004. Who takes the most revenge? Individual differences in negative reciprocity norm endorsement. Personality & Social Psychology Bulletin 30: 789–99. [Google Scholar]
  20. Emerson, Richard M. 1962. Power-dependence relations. American Sociological Review 27: 31–41. [Google Scholar] [CrossRef] [Green Version]
  21. Emerson, Richard M. 1976. Social exchange theory. Annual Review of Sociology 2: 335–62. [Google Scholar] [CrossRef]
  22. Garimella, Kiran, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship. Paper presented at the 2018 World Wide Web Conference, Lyon, France, April 23–27; pp. 913–22. [Google Scholar]
  23. Garrett, R. Kelly. 2009. Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication 14: 265–85. [Google Scholar] [CrossRef] [Green Version]
  24. Geschke, Daniel, Jan Lorenz, and Peter Holtz. 2019. The triple-filter bubble: Using agent-based modelling to test a meta-theoretical framework for the emergence of filter bubbles and echo chambers. British Journal of Social Psychology 58: 129–49. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Gil de Zúñiga, Homero, Porismita Borah, and Manuel Goyanes. 2021. How do people learn about politics when inadvertently exposed to news? Incidental news paradoxical direct and indirect effects on political knowledge. Computers in Human Behavior 121: 106803. [Google Scholar] [CrossRef]
  26. Goodman, Leo A. 1961. Snowball sampling. The Annals of Mathematical Statistics 32: 148–70. [Google Scholar] [CrossRef]
  27. Gouldner, Alvin W. 1960. The norm of reciprocity: A preliminary statement. American Sociological Review 25: 161–78. [Google Scholar] [CrossRef]
  28. Goyanes, Manuel, and Marko Skoric. 2021. Citizen (dis)engagement on social media: How the Catalan referendum crisis fostered a teflonic social media behaviour. Mediterranean Politics 27: 1–22. [Google Scholar] [CrossRef]
  29. Goyanes, Manuel, Porismita Borah, and Homero Gil de Zúñiga. 2021. Social media filtering and democracy: Effects of social media news use and uncivil political discussions on social media unfriending. Computers in Human Behavior 120: 106759. [Google Scholar] [CrossRef]
  30. Granovetter, Mark S. 1973. The Strength of Weak Ties. American Journal of Sociology 78: 1360–80. [Google Scholar] [CrossRef] [Green Version]
  31. Heiss, Raffael, and Jörg Matthes. 2019. Does incidental exposure on social media equalize or reinforce participatory gaps? Evidence from a panel study. New Media & Society 21: 2463–82. [Google Scholar]
  32. Homans, George C. 1958. Social Behaviour as Exchange. American Journal of Sociology 63: 597–606. [Google Scholar] [CrossRef]
  33. Homans, George C. 1961. Social Behaviour: Its Elementary Forms. New York: Harcourt, Brace & World, Inc. [Google Scholar]
  34. Interactive Advertising Bureau Spain. 2021. Estudio Redes Sociales 2021. Available online: https://iabspain.es/estudio/estudio-de-redes-sociales-2021/ (accessed on 31 July 2021).
  35. John, Nicholas, and Aysha Agbarya. 2020. Punching up or turning away? Palestinians unfriending Jewish Israelis on Facebook. New Media & Society 23: 1461444820908256. [Google Scholar]
  36. John, Nicholas A., and Shira Dvir-Gvirsman. 2015. “I don’t like you any more”: Facebook unfriending by Israelis during the Israel–Gaza conflict of 2014. Journal of Communication 65: 953–74. [Google Scholar] [CrossRef]
  37. John, Nicholas, and Noam Gal. 2016. ‘I can’t see you any more’: A phenomenology of political Facebook unfriending. AoIR Selected Papers of Internet Research 6: 1–4. [Google Scholar]
  38. John, Nicholas A., and Noam Gal. 2018. “He’s got his own sea”: Political Facebook unfriending in the personal public sphere. International Journal of Communication 12: 18. [Google Scholar]
  39. Kaiser, Johannes, Tobias R. Keller, and Katharina Kleinen-von Königslöw. 2021. Incidental News Exposure on Facebook as a Social Experience: The Influence of Recommender and Media Cues on News Selection. Communication Research 48: 77–99. [Google Scholar] [CrossRef] [Green Version]
  40. Kallio, Hanna, Anna-Maija Pietilä, Martin Johnson, and Mari Kangasniemi. 2016. Systematic methodological review: Developing a framework for a qualitative semi-structured interview guide. Journal of Advanced Nursing 72: 2954–65. [Google Scholar] [CrossRef] [PubMed]
  41. Kim, Yonghwan, and Hsuan-Ting Chen. 2015. Discussion Network Heterogeneity Matters: Examining a Moderated Mediation Model of Social Media Use and Civic Engagement. International Journal of Communication 9: 22. [Google Scholar]
  42. Knobloch-Westerwick, Silvia. 2014. Choice and Preference in Media Use: Advances in Selective Exposure Theory and Research. London: Routledge. [Google Scholar]
  43. Lee, Tae K., Youngju Kim, and Kevin Coe. 2018. When Social Media Become Hostile Media: An Experimental Examination of News Sharing, Partisanship, and Follower Count. Mass Communication and Society 21: 450–72. [Google Scholar] [CrossRef]
  44. Lerner, Melvin J. 1980. The Belief in a Just World: A Fundamental Delusion. New York: Plenum. [Google Scholar]
  45. Liu, Zilong, Qingfei Min, Qingguo Zhai, and Russell Smyth. 2016. Self-disclosure in Chinese micro-blogging: A social exchange theory perspective. Information & Management 53: 53–63. [Google Scholar]
  46. Malinowski, Bronislaw. 1932. Crime and Custom in Savage Society. London: Paul, Trench, Trubner. [Google Scholar]
  47. Markovsky, Barry, David Willer, and Travis Patton. 1988. Power Relations in Exchange Networks. American Sociological Review 5: 101–17. [Google Scholar] [CrossRef] [Green Version]
  48. Mauss, Marcel. 1967. The Gift: Forms and Functions of Exchange in Archaic Societies. New York: Norton. [Google Scholar]
  49. McFarland, Lynn A., and Robert E. Ployhart. 2015. Social Media: A Contextual Framework to Guide Research and Practice. Journal of Applied Psychology 100: 1653–77. [Google Scholar] [CrossRef]
  50. McPherson, Miller, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology 27: 415–44. [Google Scholar] [CrossRef] [Green Version]
  51. Molm, Linda D. 1994. Is punishment effective? Coercive strategies in social exchange. Social Psychology Quarterly 57: 75–94. [Google Scholar] [CrossRef]
  52. Molm, Linda D. 2000. Theories of social exchange and exchange networks. In Handbook of Social Theory. Edited by George Ritzer and Barry Smart. Thousand Oaks: Sage, pp. 260–72. [Google Scholar]
  53. Molm, Linda D. 2003. Theoretical comparisons of forms of exchange. Sociological Theory 21: 1–17. [Google Scholar] [CrossRef]
  54. Neubaum, German, Manuel Cargnino, and Jeanette Maleszka. 2021a. How Facebook Users Experience Political Disagreements and Make Decisions About the Political Homogenization of Their Online Network. International Journal of Communication 15: 20. [Google Scholar]
  55. Neubaum, German, Manuel Cargnino, Stephan Winter, and Shira Dvir-Gvirsman. 2021b. “You’re still worth it”: The moral and relational context of politically motivated unfriending decisions in online networks. PLoS ONE 16: e0243049. [Google Scholar] [CrossRef] [PubMed]
  56. Pariser, Eli. 2011. The Filter Bubble: What the Internet is Hiding from You. London: Penguin UK. [Google Scholar]
  57. Rainie, Lee, and Aaron Smith. 2012. Social Networking Sites and Politics. Washington, DC: Pew Research Center. [Google Scholar]
  58. Rösner, Leonie, Stephan Winter, and Nicole C. Krämer. 2016. Dangerous minds? Effects of uncivil online comments on aggressive cognitions, emotions, and behavior. Computers in Human Behavior 58: 461–70. [Google Scholar] [CrossRef]
  59. Sibona, Christopher. 2014. Unfriending on Facebook: Context collapse and unfriending behaviors. Paper presented at 2014 47th Hawaii International Conference on System Science, Waikoloa, HI, USA, January 6–9; Piscataway Township: IEEE, pp. 1676–85. [Google Scholar]
  60. Sibona, Christopher, and Steven Walczak. 2011. Unfriending on Facebook: Friend request and online/offline behavior analysis. Paper presented at 2011 44th Hawaii International Conference on System Sciences, Kauai, HI, USA, January 4–7; Piscataway Township: IEEE, pp. 1–10. [Google Scholar]
  61. Skoric, Marko M., Qinfeng Zhu, and Jin-Hsuan Tammy Lin. 2018. What predicts selective avoidance on social media? A study of political unfriending in Hong Kong and Taiwan. American Behavioral Scientist 62: 1097–115. [Google Scholar] [CrossRef]
  62. Skvoretz, John, and David Willer. 1993. Exclusion and power: A test of four theories of power in exchange networks. American Sociological Review 58: 801–18. [Google Scholar] [CrossRef] [Green Version]
  63. Sunstein, Cass R. 2018. #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press. [Google Scholar]
  64. Surma, Jerzy. 2016. Social exchange in online social networks: The reciprocity phenomenon on Facebook. Computer Communications 73: 342–46. [Google Scholar] [CrossRef]
  65. Valenzuela, Sebastián, Teresa Correa, and Homero Gil de Zúñiga. 2018. Ties, Likes, and Tweets: Using Strong and Weak Ties to Explain Differences in Protest Participation Across Facebook and Twitter Use. Political Communication 35: 117–34. [Google Scholar] [CrossRef]
  66. Van Dijck, José. 2013. The Culture of Connectivity: A Critical History of Social Media, 1st ed. Oxford: Oxford University Press. [Google Scholar]
  67. Vargo, Chris J., and Toby Hopp. 2017. Socioeconomic status, social capital, and partisan polarity as predictors of political incivility on Twitter: A congressional district-level analysis. Social Science Computer Review 35: 10–32. [Google Scholar] [CrossRef]
  68. Wang, Xuequn, and Zilong Liu. 2019. Online engagement in social media: A cross-cultural comparison. Computers in Human Behavior 97: 137–50. [Google Scholar] [CrossRef]
  69. Wasko, Molly McLure, and Samer Faraj. 2005. Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Quarterly 29: 35–57. [Google Scholar] [CrossRef]
  70. Xiao, Huilin, Weifeng Li, Xubin Cao, and Zongming Tang. 2012. The online social networks on knowledge exchange: Online social identity, social tie and culture orientation. Journal of Global Information Technology Management 15: 4–24. [Google Scholar] [CrossRef]
  71. Yan, Zhijun, Tianmei Wang, Yi Chen, and Han Zhang. 2016. Knowledge sharing in online health communities: A social exchange theory perspective. Information & Management 53: 643–53. [Google Scholar]
  72. Yang, JungHwan, Matthew Barnidge, and Hernando Rojas. 2017. The politics of “Unfriending”: User filtration in response to political disagreement on social media. Computers in Human Behavior 70: 22–29. [Google Scholar] [CrossRef]
  73. Yoo, Joseph, Yee Man Margaret Ng, and Thomas Johnson. 2018. Social Networking Site as a Political Filtering Machine: Predicting the Act of Political Unfriending and Hiding on Social Networking Sites. The Journal of Social Media in Society 7: 92–119. [Google Scholar]
  74. Zhu, Qinfeng, Marko Skoric, and Fei Shen. 2017. I shield myself from thee: Selective avoidance on social media during political protests. Political Communication 34: 112–31. [Google Scholar] [CrossRef]
Table 1. Characteristics of the sample.
Participants: 30
Gender: 56% female (n = 17); 44% male (n = 13)
Age: 18–24 (n = 28); 25–30 (n = 2)
Employment status: Student (n = 28); Unemployed (n = 1); Working (n = 1)
Education: High school or less (n = 8); Some college (n = 19); College degree or more (n = 3)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
