Article

Synthetic Social Alienation: The Role of Algorithm-Driven Content in Shaping Digital Discourse and User Perspectives

1 Radio, Television and Cinema, Communication Faculty, Istanbul Aydın University, 34295 Istanbul, Turkey
2 Radio, Television and Cinema, Communication Faculty, Istinye University, 34396 Istanbul, Turkey
3 Independent Researcher, 34353 Istanbul, Turkey
* Author to whom correspondence should be addressed.
Journal. Media 2025, 6(3), 149; https://doi.org/10.3390/journalmedia6030149
Submission received: 2 July 2025 / Revised: 2 September 2025 / Accepted: 5 September 2025 / Published: 10 September 2025

Abstract

This study investigates how algorithm-driven content curation impacts mediated discourse, amplifies ideological echo chambers, and alters linguistic structures in online communication. While social media platforms promise connectivity, their engagement-driven mechanisms reinforce biases and fragment discourse spaces, leading to Synthetic Social Alienation (SSA). By combining discourse analysis with in-depth interviews, this study examines the algorithmic mediation of language and meaning in digital spaces, revealing how algorithms commodify attention and shape conversational patterns. Four SSA patterns were identified: Algorithmic Manipulation, Digital Alienation, Platform Dependency, and Echo Chamber Effects. A hybrid dataset (180 training, 30 test samples) was used to train classification models. Among four algorithms, Support Vector Machine (SVM) achieved the highest performance (90.0% accuracy, 90.4% F1-score). Sentiment analysis revealed distinct language structures for positive (AUC = 0.994), neutral (AUC = 0.933), and negative (AUC = 0.919) expressions. SHAP and LIME analyses highlighted the key features driving model decisions. The findings expose how digital platforms commodify attention and shape user discourse, underscoring the need for ethical algorithm design and regulatory oversight.

1. Introduction

In the era of algorithm-based digital media, social media platforms shape information consumption and the evolution of public discourse (Fisher & Mehozay, 2019; Saurwein & Spencer-Smith, 2021; Stark et al., 2020). Building on Marx’s concept of estrangement, digital alienation extends beyond economic production into the realms of social and intellectual interactions. The commodification of user data on social media mirrors traditional labor exploitation, as platforms monetize engagement metrics without users’ active consent (Fuchs, 2014). This dynamic underscores the systemic nature of alienation in algorithmic environments, where individuals are both the producers and products of digital systems (Mosco, 2016).
However, beyond its impact on individuals, algorithmic mediation also shapes the structure and nature of discourse itself (Klinger & Svensson, 2018; Magalhães, 2018; Milan, 2015; Rehman et al., 2024). Social media platforms are not neutral communication conduits; their recommendation algorithms curate, amplify, and suppress specific discourses based on engagement metrics (Chavanayarn, 2024; Sanseverino, 2023).
Further, echo chambers and filter bubbles limit the diversity of perspectives encountered by users. While echo chambers actively exclude dissenting viewpoints, filter bubbles arise passively through algorithmic recommendations that isolate users within homogeneous content loops. These phenomena create polarized communities and reinforce biases, underscoring the need for transparency in content curation and for user education.
Echo chambers and filter bubbles fragment public discourse by narrowing the scope of perspectives accessible to users, often amplifying ideological polarization. Recent studies (Bok, 2023; Buhmann et al., 2020; Reviglio, 2020) highlight that these mechanisms are exacerbated by algorithmic personalization, which optimizes user engagement over informational diversity. Echo chambers and filter bubbles are often discussed interchangeably but represent distinct phenomena. Echo chambers actively discredit external viewpoints, fostering systematic distrust of outside information sources (Nguyen, 2020). Conversely, filter bubbles arise from passive exclusion, where algorithms inadvertently omit diverse perspectives, leaving users in isolated informational environments (Bruns, 2019; Pariser, 2011). Both structures can reinforce preexisting biases, but echo chambers rely on deliberate exclusion and manipulation of trust, while filter bubbles result from personalized algorithmic recommendations (Nguyen, 2020; Zimmer et al., 2019). Bruns (2017) challenges the widespread assumption that social media users are trapped in ideologically homogeneous environments. Drawing on empirical data, he argues that the phenomenon of echo chambers is often overstated and lacks consistent evidence across platforms. Instead, Bruns suggests that users are frequently exposed to diverse viewpoints, though they may selectively engage with content that aligns with their beliefs.
Furthermore, social media platforms and search engines utilize algorithms to personalize user experiences by analyzing online habits and preferences. These algorithms aim to reduce information overload and enhance engagement but often lead to informational silos (Costa Netto & Maçada, 2019). Pariser (2011) argues that such systems create a “personalized universe of information” that filters out opposing views, contributing to ideological segregation. Similarly, Nguyen (2020) highlights how algorithms promote epistemic bubbles by unintentionally omitting alternative viewpoints. However, studies by Ross Arguedas et al. (2022) reveal that algorithmic curation can lead to slightly more diverse news consumption for non-partisan users, complicating the narrative of uniform adverse effects.
Algorithmic systems also shape individuals’ behavior and identity. Perez Vallejos et al. (2021) emphasize that algorithms influence user well-being by creating environments prioritizing convenience and engagement over diversity and transparency. This lack of agency further entrenches users in self-reinforcing informational ecosystems. Similarly, Costa Netto and Maçada (2019) argue that the interplay of filter bubbles and echo chambers affects identity construction by restricting individuals’ access to diverse viewpoints and fostering homogeneity.
Addressing the challenges posed by algorithmic curation requires a multifaceted approach. Increasing transparency in algorithmic operations and enhancing digital literacy are critical first steps (Zimmer et al., 2019). Policy makers must regulate platforms to ensure that algorithms prioritize diverse content exposure over profit-driven engagement. Furthermore, fostering cross-platform collaborations to promote content diversity can help mitigate the adverse effects of echo chambers and filter bubbles (Perez Vallejos et al., 2021).
This paper examines how digital commodification alters not just individual agency but also public discourse as a collective process. In this context, we introduce the concept of Synthetic Social Alienation (SSA), which extends Marx’s alienation theory to contemporary digital spaces, emphasizing how algorithms commodify data and redefine interpersonal and intellectual experiences. Building on Synthetic Social Alienation, we argue that prolonged exposure to algorithm-based content creates a cognitive and emotional disconnect from diverse perspectives, critical thinking, and authentic social connections. We also argue that digital alienation manifests itself through reliance on curated feeds, leading to a narrow worldview, diminished initiative, and isolation within homogenized content ecosystems. We argue that platforms like Twitter, YouTube, and TikTok use machine learning models to prioritize content based on virality, polarization, and retention, influencing what is discussed and how. To examine these dynamics, we conduct in-depth interviews with 10 people across demographic groups. As a result, discourse is increasingly shaped by algorithmic imperatives rather than organic human deliberation.

2. Literature Review

2.1. Alienation, Algorithms and the Digital Media Ecosystem

The concept of alienation, originally framed by Marx (1844/1978), describes the estrangement of individuals from their labor, creativity, and social relations under capitalist conditions. In digital capitalism, this estrangement now manifests within algorithmic infrastructures where user attention, interaction, and data become primary sources of value extraction (Fuchs, 2014; Zuboff, 2019). However, as Dean (2019) argues in her theory of communicative capitalism, the very infrastructure of public communication becomes subsumed under capitalist logic. Rather than enabling democratic exchange, digital networks capture communicative acts for circulation and commodification, converting participation into a depoliticized and alienating process.
This context is central to understanding Synthetic Social Alienation (SSA)—a form of estrangement emerging not through overt coercion but via affective capture, personalization, and the illusion of agency in digital spaces. SSA combines epistemic fragmentation (through personalization), emotional manipulation (via curation), and structural opacity (through algorithmic design) into a distinct modality of mediated detachment.
The concepts of filter bubbles and echo chambers are often used to illustrate algorithmic narrowing of informational diversity. However, these terms are frequently conflated. Filter bubbles refer to unintentional algorithmic personalization that shapes the informational environment often without user awareness (Pariser, 2011; Seuren, 2024), whereas echo chambers emerge from active user choices and homophily bias, forming self-reinforcing ideological communities (Sunstein, 2001). Importantly, this distinction concerns not just effects, but mechanisms: while filter bubbles are algorithmic, echo chambers are socially constructed.
To better conceptualize this, Bruns (2019) argues that echo chambers should be understood through the lens of factionalism, where users coalesce around shared identities and grievances, often intensifying polarization. This view challenges deterministic models by showing how social structures, not just algorithms, drive division. Additionally, recent empirical research demonstrates that echo chambers and filter bubbles do not always lead to decreased exposure to opposing views. Möller et al. (2020) emphasize that platform design, user behavior, and recommender logic all condition these effects, sometimes fostering diversity rather than eroding it.
Furthermore, while much literature emphasizes user passivity and algorithmic control, this view has been problematized by the emerging concept of algorithmic resistance. Bonini and Treré (2025) highlight how users actively subvert, manipulate, or negotiate algorithmic systems—for instance by adjusting behavior to “trick” recommendation engines. Such practices reveal agency within constraint and complicate notions of total alienation.
However, this paper acknowledges a key conceptual leap: it assumes that individual-level alienation scales automatically to societal-level fragmentation. This assumption needs to be supported with empirical evidence. Kossowska et al. (2023) show that while individual digital behaviors may indicate signs of isolation or detachment, the translation of these effects into broader communal or political disaffection is neither linear nor uniform. The claim that algorithmic alienation leads to societal atomization must therefore be refined and qualified with attention to context and intervening variables.
Finally, claims about content diversity should be made with conceptual precision. The term “diverse content” is often underspecified. As Loecherbach et al. (2020) argue, diversity can be operationalized across multiple axes: source diversity, opinion diversity, representational diversity, etc. Without clarifying what kind of diversity is at stake—and how it is measured—critiques of algorithmic recommendation remain ambiguous.

2.2. The Sociopsychological Dimensions of Algorithmic Alienation

The psychological, emotional, and social effects of algorithmic systems extend far beyond informational filtering. As Seaver (2017) compellingly argues, recommender systems function as “captivating traps” that not only shape what users engage with, but also subtly transform how they perceive their agency, participation, and social reality. These traps deepen our understanding of algorithmic alienation—not merely as an epistemic phenomenon, but as an affective and ontological condition.
Although social media platforms promise connectivity, they frequently deliver curated and performatively optimized forms of exposure that intensify social comparison, anxiety, and fragmentation (Taylor, 2022). Repeated interactions with emotionally charged and idealized content can erode self-esteem, induce feelings of inadequacy, and undermine genuine interpersonal connection (Festinger, 1954; Baumeister & Leary, 1995). These dynamics echo Marx’s notion of estrangement—not from material production, but from authentic social relations and self-understanding.
Furthermore, the commodification of user activity—likes, shares, comments, and attention—has become a measurable component of digital capitalism (Zuboff, 2019). Users operate simultaneously as producers of data and as monetized targets within feedback loops that prioritize algorithmic value over meaningful interaction or democratic deliberation. Importantly, this logic operates within infrastructures of surveillance and predictive analytics that have evolved significantly since early digital labor critiques (e.g., Terranova, 2000). Contemporary scholars such as Andrejevic (2019) and Couldry and Mejias (2019) underscore the need to ground such critiques in empirical, post-2010 platform studies to accurately depict the mechanisms of platform capitalism.
Yet, users are not passive victims. As Bonini and Treré (2025) point out, practices of algorithmic resistance—such as subverting platform logic, gaming visibility, or tactically withdrawing—demonstrate user agency within constrained environments. Understanding alienation in the digital age thus requires a dialectical approach: it is shaped by structural asymmetries, but also actively negotiated and occasionally contested by the very subjects it seeks to enclose.

2.3. Rethinking Synthetic Social Alienation

Building on these frameworks, this study proposes the concept of Synthetic Social Alienation (SSA) to describe a form of estrangement unique to algorithmic environments. Unlike classical alienation, SSA occurs not through coercion or force, but through affective capture, personalization, and the illusion of participation. SSA combines epistemic fragmentation (filtering), emotional manipulation (curation), and structural opacity (algorithmic black-boxing) into a new modality of mediated detachment.
However, for SSA to serve as a meaningful analytic concept, it must be situated within and against the broader literature on digital estrangement, recommendation systems, and media diversity. This requires clearer definitions and empirical referents. Future iterations of this framework may also engage with ongoing debates in digital sociology, such as the tensions between autonomy and automation, or participation and prediction.

3. Materials and Methods

This study conceptualizes digital alienation not as an inevitable or deterministic outcome of technological systems but as a socially constructed and negotiated experience shaped by algorithmic architectures and individual agency. While acknowledging the significant role of algorithmic mediation in digital communication, this study avoids reductive technological determinism by foregrounding the complex interplay between external (platformic) structures and internal (user) interpretations and actions.

3.1. Research Questions and Methodological Orientation

To investigate how alienation emerges within algorithmically curated social media environments, this study is guided by two main research questions and two sub-questions:
RQ1: How do users experience and describe algorithmic influence on their agency, affect, and discursive participation?
RQ2: What forms of alienation—emotional, epistemic, or social—manifest through users’ interaction with recommendation-driven platforms?
Sub-questions:
RQ2a: What linguistic and affective patterns appear in users’ descriptions of algorithmic curation and platform dynamics?
RQ2b: How does Synthetic Social Alienation (SSA) take shape in professional, political, and activist digital discourse?
These questions were operationalized through a hybrid qualitative design, combining in-depth semi-structured interviews with discourse and sentiment analysis. The aim is to understand both users’ subjective meaning making and the broader structural dynamics embedded in algorithmic recommendation systems.

3.2. Interview Structure, Analytical Procedures, and Bias Mitigation

The interview guide was inspired by previous studies on algorithmic media experiences (Shin & Park, 2019; Bucher, 2018) and adapted to probe specific themes of agency, alienation, and reflexivity. To minimize bias and social desirability, neutral phrasing was used (e.g., “Can you describe your typical interactions with the content you see?” rather than “Do algorithms manipulate you?”). Interviews were transcribed and coded using a mixed inductive-deductive approach, ensuring both theoretical alignment and openness to emerging categories.
To triangulate user narratives with platform-level dynamics, a discourse analysis was conducted on trending topics and comment streams from Twitter, TikTok, and YouTube. A purposive sampling strategy was adopted: we collected the top 500 comments for each platform from posts associated with trending hashtags during a three-month observation window (n = 1500). Selection was based on visibility (most-liked comments) and temporal relevance (comments posted within the trending period), ensuring coverage of both high-engagement and emergent discourse. This analysis involved three layers (an illustrative sketch of the first layer follows the list):
Lexical frequency shifts: tracking keyword emergence across time
Sentiment analysis: categorizing emotional tone via natural language processing (NLP)
Topic fragmentation: identifying how discussions diverged into isolated discourse clusters
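As referenced above, the following minimal Python sketch illustrates the first analytical layer, lexical frequency shifts across the observation window. The comment table and its timestamp and text columns are hypothetical stand-ins for the collected platform data, not the study's actual pipeline.

```python
# Illustrative sketch: tracking keyword emergence per month in a comment corpus.
# Column names ("timestamp", "text") and the keyword list are assumptions.
from collections import Counter
import pandas as pd

def keyword_frequencies(comments: pd.DataFrame, keywords: list[str]) -> pd.DataFrame:
    """Count keyword occurrences per month to trace lexical frequency shifts."""
    comments = comments.copy()
    comments["month"] = pd.to_datetime(comments["timestamp"]).dt.to_period("M")
    rows = []
    for month, group in comments.groupby("month"):
        counts = Counter()
        for text in group["text"].str.lower():
            for kw in keywords:
                counts[kw] += text.count(kw)
        rows.append({"month": str(month), **counts})
    return pd.DataFrame(rows).fillna(0)

# Usage: keyword_frequencies(df, ["algorithm", "echo chamber", "for you page"])
```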
Discourse patterns were cross-validated with interview data to identify markers of Synthetic Social Alienation (SSA)—a concept developed by the authors to describe hybrid forms of emotional, epistemic, and structural estrangement intensified by algorithmic mediation.
A multi-step thematic analysis procedure was employed to operationalize SSA. First, open coding was used to identify recurring expressions of disconnection, distrust, or exhaustion. Second, these codes were grouped into broader discursive motifs, which were then refined into “SSA indicators.” Finally, axial coding aligned these categories with theoretical constructs of estrangement, ensuring conceptual rigor. This process, which was conducted iteratively by three coders, enhanced reliability and allowed emergent categories to complement deductive theoretical anchors.
To further operationalize SSA, we trained a sentiment classifier using a small labeled dataset derived from user comments. The model achieved 80% accuracy overall, with 100% recall in the negative class and slightly lower precision in the positive class. Macro and weighted F1 scores of 0.80 indicated balanced performance, though we acknowledge that the limited dataset size constrains generalizability. Further validation with larger corpora is needed.
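The reported figures (overall accuracy, per-class recall, macro and weighted F1) can be illustrated with a minimal evaluation sketch; the label arrays below are placeholders rather than the actual study data.

```python
# Minimal sketch of how the reported metrics could be computed with scikit-learn.
# y_true/y_pred are hypothetical placeholders, not the study's labels.
from sklearn.metrics import accuracy_score, classification_report

y_true = ["Negative", "Negative", "Positive", "Notr", "Positive"]   # gold labels
y_pred = ["Negative", "Negative", "Positive", "Positive", "Notr"]   # model output

print(accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=2))  # includes macro/weighted F1
```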

3.3. Limitations and Methodological Reflexivity

While our triangulated approach enhances this study’s robustness, certain limitations remain. The small sample size and platform-specific focus may limit broader applicability. Interview findings are self-reported and thus subject to interpretation biases and platform opacity. Additionally, the SSA framework is still in early conceptual development and requires further empirical testing and theoretical grounding within the literature on critical algorithm studies (e.g., Gillespie, 2014; Kitchin, 2021).
Finally, the Results section provides a visual representation of the SSA dimensions to illustrate the interrelations between affective, cognitive, and discursive alienation beyond the tabular format.

3.4. Profile of the Interviewees and Methodological Considerations

This study draws on ten semi-structured interviews with frequent users of algorithm-driven platforms such as TikTok, YouTube, Instagram, Twitter (X), and Facebook. Although the number may appear modest, this sample size aligns with the concept of information power (Malterud et al., 2016), where the adequacy of qualitative sampling depends on the richness of the data, study aim specificity, and the quality of the dialogue. During data collection, we observed recurring themes and saturation points—especially in participants’ perceptions of algorithmic visibility, ideological curation, and emotional impact—which justified the decision to conclude at ten interviews (Guest et al., 2006).
Participants were selected based on a combination of theoretical and convenience sampling, with attention to diversity in age, gender, socioeconomic background, professional orientation, and platform usage. Voluntary participation was obtained through digital recruitment, and informed consent was secured in line with institutional ethical guidelines. Each interview lasted between 60 and 90 min, conducted over a period of two months (March–May 2025), either via video call or encrypted messaging, depending on participant preference and privacy concerns.
The interviewees represent diverse digital media users, each engaging with social platforms differently based on age, profession, and interests: (i) Younger participants, such as a 20-year-old college student and a 16-year-old high school student, primarily use TikTok and Instagram for entertainment and social interaction. (ii) Professionals, including a 38-year-old marketing expert and a 26-year-old AI specialist, rely on LinkedIn and Twitter for career growth. A 26-year-old gamer connects with the gaming community through YouTube and Twitch, while a 42-year-old journalist turns to Twitter for real-time news. A 29-year-old from a rural area uses Facebook to stay in touch with family and friends, whereas a 32-year-old stay-at-home parent engages in parenting communities on Facebook and Instagram. Meanwhile, a 22-year-old social activist leverages social media to promote causes and network with like-minded individuals, and a 65-year-old retiree relies on Facebook and YouTube for news and family connections. These participants, labeled as Profile 1 through Profile 10, illustrate the diverse ways individuals interact with algorithmic content across different platforms.
Describing how algorithm-driven content curation impacts mediated discourse, reinforces ideological echo chambers, and alienates users is complex and challenging. Therefore, this study combines discourse analysis with in-depth interviews to examine the algorithmic mediation of language and meaning in digital spaces. We also attempt to understand how algorithms commodify attention and shape conversational patterns. Thus, we conducted in-depth interviews with highly active users who are deeply immersed in social media. Because the interviews involved disclosures related to gender, self-relationships, and personal experiences, participation was strictly voluntary, and the researchers interviewed only those who volunteered (as discussed and applied by Gürkan et al., 2024). Therefore, as a convenience sample, our sample was not representative of the entire population (all social media users) but did provide theoretical insight into active social media users within this population.
The interview questions are presented in Table 1.
These interview questions aim to understand how users engage with social media algorithms and how these systems influence their content consumption, perceptions, and interactions. By exploring users’ motivations, awareness of algorithmic curation, and experiences with content recommendations, we can assess how algorithms shape discourse, reinforce biases, or facilitate exposure to diverse viewpoints. Further, these questions help examine whether users actively resist or modify algorithmic recommendations and how personalized content affects their understanding of different perspectives and social connections.

4. Results

Findings from the interview data suggest that users experience what we term Synthetic Social Alienation (SSA) through algorithmically reinforced discourse patterns. This discourse analysis shows how algorithmic mediation shapes users’ engagement, perceptions, and interactions on social media. By examining linguistic patterns, framing mechanisms, and power dynamics, we seek to identify how users conceptualize their agency, emotions, and visibility within algorithmic environments. Moreover, discourse analysis further reveals concrete shifts in conversational structure due to algorithmic mediation. Key insights include the following:
(a) Echo Chamber Effects and Repetitive Language: (i) Lexical Choices: Users often describe their experience with phrases like “I always see the same thing,” “It’s just an endless loop,” or “I feel trapped in this content.” (ii) Discourse Pattern: The use of deterministic and passive language suggests a perception of limited agency and a structured engagement dictated by the algorithm. (iii) Implication: The algorithmic reinforcement of familiar content creates a discourse of inevitability, reinforcing ideological silos.
(b) Emotional Amplification and Speech Economy: (i) Lexical Compression: Emotional responses such as “It’s addictive,” “It makes me anxious,” or “I love the recommendations” reveal the amplification of emotions through algorithmic exposure. (ii) Binary Framing: Users tend to categorize content as either highly engaging or frustrating, indicating that algorithmic curation fosters polarized emotional experiences. (iii) Implication: Algorithmic mediation intensifies emotional discourse, pushing users toward more extreme affective responses.
(c) Perceptions of Control and Manipulation: (i) Passive Engagement: Users (Profiles 1, 2, 3, 5, 6) use phrases like “I have no control,” “It just shows me things,” or “I feel stuck,” indicating a discourse of helplessness. (ii) Active Resistance: Users (Profiles 4, 7, 8, 9, 10) state, “I try to break the algorithm,” “I engage critically,” or “I search for alternative content,” suggesting a counter-discourse of agency. (iii) Implication: The contrast between passive and active language highlights a divide in algorithmic literacy and perceived agency.
(d) Algorithmic Dependence in Professional and Activist Discourse: (i) Paradoxical Framing: Social media professionals and activists express both reliance and frustration (e.g., “I need the algorithm to reach people, but it also limits me”). (ii) Linguistic Strategies: Their discourse balances pragmatic engagement (“I optimize for visibility”) with resistance (“I fight against suppression”). (iii) Implication: Algorithmic mediation structures professional discourse into a negotiation between visibility and control.
(e) Shaping Worldviews and Intellectual Engagement: (i) Polarized Language: Users describe opposing views as “nonsense,” “toxic,” or “dangerous,” reflecting how algorithmic exposure frames alternative perspectives as extreme or invalid. (ii) Intellectual Confinement: The lack of discursive variation (e.g., “I don’t see new perspectives,” “It’s always reinforcing my beliefs”) suggests a narrowing of intellectual engagement. (iii) Implication: Algorithmic discourse structures reality by reinforcing existing ideological positions rather than fostering critical thinking.
(f) Visibility Inequality and Speech Hierarchies: (i) Hierarchical Discourse: Users acknowledge algorithmic bias in visibility, stating, “Only certain voices get heard,” “Smaller creators disappear,” or “It favors mainstream content.” (ii) Resistance Strategies: Users who recognize visibility hierarchies attempt countermeasures like manual searches and engagement strategies. (iii) Implication: Social media algorithms create a digital speech economy where visibility is a currency governed by algorithmic logic.
These findings underscore how algorithmic mediation shapes what is seen and how discourse is structured, framed, and sustained. The tables formed with the collected data are organized into three key areas: (i) user motivations and experiences with algorithmic content, (ii) perceptions of Synthetic Social Alienation (SSA), and (iii) strategies for managing algorithmic content. The common themes from the interviewees’ responses are shown in Table 2, which includes themes, everyday observations, and examples. Table 2 categorizes the interviewees’ responses based on common themes related to their social media use, highlighting the motivations behind their engagement with platforms, their awareness of algorithms, and how they perceive the content they encounter. This table reveals varying levels of awareness, from passive content consumers to more critical, actively engaged users, reflecting diverse experiences with algorithmic personalization.
According to the research data, younger users demonstrate a strong dependence on platforms like TikTok and Instagram for entertainment and social validation. While they enjoy the convenience of algorithmic recommendations, their awareness of algorithmic manipulation is limited. This causes alienation by design, as their digital environments become confined to repetitive, dopamine-driven content loops.
Profiles such as the marketing professional, gamer, and AI expert highlight the professional reliance on algorithms. While these users are more critical of algorithms, they experience alienation through dependency. Their success often hinges on navigating and appeasing opaque systems, leading to a paradoxical relationship with platforms.
The activist highlights the challenges algorithms pose in advocating for change. Algorithms favor polarizing or viral content, often sidelining nuanced discussions. This results in alienation through selective amplification, where the system promotes visibility but compromises the genuineness and reach of its message.
Stay-at-home parents and older users share a more passive relationship with social media. Their usage focuses on personal connection and community-building, but they report feelings of alienation through algorithmic noise, where irrelevant or overly targeted recommendations overshadow genuine interactions.
Across all profiles, there is a recurring theme of algorithms reinforcing existing beliefs and preferences. This alienation through homogenization restricts exposure to diverse perspectives, fostering ideological silos and distorting perceptions of broader social realities.
We can categorize the answers of the interviewees according to the sense of agency, belonging, and intellectual engagement (Table 3):
As seen in Table 3, Profile 1, 2, 3, 5, and 8 express a limited sense of agency, feeling constrained by the algorithm’s repeated content. They acknowledge that while algorithms present tailored content, it feels repetitive, and they find themselves in cycles of seeing the same types of content. These users feel somewhat stuck within a predictable, repetitive loop. They desire more control or new content but are often overwhelmed by algorithmic predictability.
Profile 4, 6, 7, 9, and 10 engage with the algorithm actively and may feel more in control of the content they see. They engage more purposefully by interacting with content to expand their feeds or shift algorithms toward diverse topics. These users maintain a stronger sense of agency by curating their experiences, although they still acknowledge the algorithm’s influence.
Profiles 1, 2, 3, 4, 5, and 6 focus on a sense of belonging within spaces that align with their interests, where they primarily interact with like-minded people. This reinforces their sense of connection but might limit exposure to other viewpoints. These users feel comfortable in their spaces but are less likely to encounter diverse perspectives, which may lead to limited intellectual growth.
Profiles 7, 8, 9, and 10 express a more varied sense of belonging, seeking diverse communities and opinions. However, they also note that algorithmic personalization often reinforces their current views, limiting their exposure to opposing perspectives. These users actively try to break out of the echo chamber, yet the algorithm still tends to limit interactions with diverse groups or ideas.
Profiles 1, 2, 3, 5, and 6 experience limited intellectual engagement due to the repetitive nature of content. They often scroll past content that challenges their opinions or fail to find enough thought-provoking material that encourages deeper engagement. These users may feel intellectually stagnant or disconnected from new ideas, as the algorithm primarily delivers content they already agree with.
Profiles 4, 7, 8, 9, and 10 show more intellectual engagement, actively seeking diverse content and engaging with challenging material. However, they acknowledge the algorithm’s tendency to push content that is still aligned with their preferences, making it hard to encounter truly diverse ideas. These users engage more critically with the content they see but still recognize that the algorithm limits their ability to access various intellectual challenges.
While users in Group 1 (Profiles 1, 2, 3, 5, 6) face frustration with the repetitive nature of algorithmic content, leading to a limited sense of agency and intellectual engagement, users in Group 2 (Profiles 4, 7, 8, 9, 10) feel more empowered to engage with the algorithm, curating their content more actively but still experiencing limitations in intellectual diversity due to the algorithm’s inherent bias towards familiar content.
Table 4 focuses on Synthetic Social Alienation (SSA), examining how participants relate to feelings of isolation, disconnection, or exclusion resulting from algorithm-based content recommendations. It highlights whether participants feel disconnected from broader societal contexts or communities due to the personalization of content and their inability to quickly encounter opposing viewpoints or diverse perspectives.
Table 5 shows the different strategies users employ to manage the influence of algorithms on their social media experiences. The techniques vary from manual content curation to changing platforms or reducing screen time. They could provide insights into the level of agency users feel they have in curating their digital experiences. This table focuses on how participants deal with content recommendations, personalization, and the perceived limitations of their social media and algorithm-based platforms.

4.1. Dataset for Sentiment Analyses

The data analyzed in this study were obtained from two Microsoft Word documents (.docx): one labeled (for training) and one unlabeled (for testing). Each interview starts with the prefix “Interviewee:” and, in the training file, each sentence carries an emotion label: “Positive”, “Negative”, or “Notr” (neutral).
  • interviews_train.docx: Labeled training data. Each interview starts with the phrase “Interviewee:” and each sentence ends with one of the emotion labels “Positive”, “Negative”, or “Notr”.
  • interviews_test.docx: Unlabeled test data, used for prediction and SSA analysis.
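A minimal sketch of how such labeled .docx files could be parsed with python-docx is given below; the label-at-line-end convention follows the description above, while the helper name and exact parsing details are assumptions.

```python
# Sketch: load labeled interview sentences from interviews_train.docx.
# Assumes each relevant paragraph starts with "Interviewee:" and ends with a label.
from docx import Document

LABELS = ("Positive", "Negative", "Notr")

def load_labeled(path: str):
    samples = []
    for para in Document(path).paragraphs:
        text = para.text.strip()
        if not text.startswith("Interviewee:"):
            continue
        for label in LABELS:
            if text.endswith(label):
                sentence = text[len("Interviewee:"):-len(label)].strip()
                samples.append((sentence, label))
                break
    return samples

train_samples = load_labeled("interviews_train.docx")
```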

4.2. Data Preprocessing

The text data were cleaned in the following steps:
  • Lowercase conversion,
  • Removal of numbers and punctuation,
  • Removal of stopwords (via stopwords.words), and
  • Retention only of words longer than three letters.
These operations were performed with the clean_text() function.
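A possible clean_text() implementation consistent with these steps is sketched below; the specific regular expressions and tokenization choices are assumptions, not the study's published code.

```python
# Sketch of a clean_text() function matching the listed preprocessing steps.
# Requires: nltk.download("stopwords")
import re
import string
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words("english"))

def clean_text(text: str) -> str:
    text = text.lower()                                        # lowercase conversion
    text = re.sub(r"\d+", "", text)                            # remove numbers
    text = text.translate(str.maketrans("", "", string.punctuation))  # remove punctuation
    tokens = [w for w in text.split()
              if w not in STOPWORDS and len(w) > 3]            # drop stopwords, keep words > 3 letters
    return " ".join(tokens)
```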

4.3. SSA and Emotional Keyword Matching

Two special word lists are defined:
  • SSA_PHRASES: Statements representing social alienation (Synthetic Social Alienation) in the digital environment
  • EMOTION_WORDS: Words reflecting positive and negative emotions
Sentences were scanned against these lists to determine how many matches they contained. The results were exported as CSV and graphs.
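The matching step can be illustrated as follows; the SSA_PHRASES and EMOTION_WORDS entries shown are abridged, hypothetical examples rather than the full lists used in the study.

```python
# Sketch: count SSA-phrase and emotion-word matches per sentence, export as CSV.
# The phrase/word lists are illustrative placeholders.
import pandas as pd

SSA_PHRASES = ["same content", "stuck in a loop", "no control", "echo chamber"]
EMOTION_WORDS = {"positive": ["love", "enjoy"], "negative": ["anxious", "trapped"]}

def match_counts(sentences: list[str]) -> pd.DataFrame:
    rows = []
    for s in sentences:
        s_low = s.lower()
        rows.append({
            "sentence": s,
            "ssa_matches": sum(p in s_low for p in SSA_PHRASES),
            "pos_matches": sum(w in s_low for w in EMOTION_WORDS["positive"]),
            "neg_matches": sum(w in s_low for w in EMOTION_WORDS["negative"]),
        })
    return pd.DataFrame(rows)

# match_counts(cleaned_sentences).to_csv("ssa_matches.csv", index=False)
```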
Synthetic Social Alienation (SSA) defines types of artificial alienation that users experience on digital platforms, such as algorithmic steering, social disconnection, and belief reinforcement. NLP methods reveal these phenomena at the word level, enabling the systematic classification and interpretation of SSA. This makes the hidden alienation dynamics in digital discourse scientifically visible. This study examines the phenomenon of Synthetic Social Alienation (SSA) using a mixed-methods approach that combines qualitative interview data with natural language processing (NLP) techniques. The research is built upon Marx’s theory of alienation and contemporary digital media studies, aiming to develop a classification system capable of identifying SSA patterns in social media discourse.

4.4. Findings and Visualization

In the digital context, alienation manifests itself in four different patterns:
  • Algorithmic Manipulation: Systematic control of content visibility through opaque algorithmic processes (Gillespie, 2014).
  • Digital Alienation: Psychological disconnection arising from the replacement of real human interaction with mediated communication (Turkle, 2015).
  • Platform Dependency: Behavioral dependence on digital platforms for social approval and information consumption (van Dijck, 2013).
  • Echo Chamber Effects: Reinforcement of existing beliefs through algorithmic filtering and selective exposure (Pariser, 2011).
This framework expands Dean’s (2019) theory of communicative capitalism, which argues that digital platforms transform communication into a form of labor that generates value for companies. Additionally, Bonini and Treré (2025) contribute to the concept of algorithmic resistance, while Couldry and Mejias (2019) explain how digital platforms reshape social practices through the concept of deep mediatization.
In this study, original interview data and carefully prepared synthetic examples were combined to create a hybrid dataset. The original dataset consists of a total of 90 responses from 10 participants who answered 9 structured questions about their social media experiences. Although the sample size is limited, it provides a basis for investigating SSA patterns in digital communication. To address the class imbalance caused by the small number of data points and to improve the classification system, 90 additional synthetic examples were generated for the training dataset using templates prepared under expert guidance. These templates were based on linguistic patterns identified in the original interviews and the digital alienation literature. Each synthetic example was verified for theoretical consistency according to SSA typologies.
As a result, the dataset was prepared to include 180 training examples (90 original + 90 synthetic) and 30 test examples. The test set consists of 15 original responses and 15 synthetic examples, enabling the model to be evaluated on both real and generated data. This approach acknowledges the limitations of a small original sample while providing sufficient data for initial model development.
The text preprocessing process was designed to reduce noise while preserving meaningful linguistic patterns. In this process:
  • Abbreviations were expanded (e.g., “I’m” → “I am”),
  • Punctuation marks and special characters were removed,
  • An expanded stopword list containing domain-specific low-information words such as “im,” “makes,” and “checking” was applied, and
  • Words consisting of three or fewer letters and numerical sequences were filtered out.
TF-IDF and CountVectorizer were used together in the feature extraction phase. The TF-IDF vectorizer was configured with a maximum of 300 features (max_features = 300) and a bigram context (ngram_range = (1,2)), while the CountVectorizer provided 200 additional features. This resulted in a 500-dimensional feature space.
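A minimal sketch of this combined 500-dimensional feature space is given below, assuming preprocessed sentence lists train_texts and test_texts (hypothetical variable names).

```python
# Sketch: 300 TF-IDF features (unigrams + bigrams) concatenated with
# 200 CountVectorizer features, as described above.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.pipeline import FeatureUnion

features = FeatureUnion([
    ("tfidf", TfidfVectorizer(max_features=300, ngram_range=(1, 2))),
    ("counts", CountVectorizer(max_features=200)),
])

X_train = features.fit_transform(train_texts)   # train_texts: preprocessed sentences
X_test = features.transform(test_texts)
```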
Four machine learning algorithms were evaluated: Logistic Regression, Random Forest, Gradient Boosting, and Support Vector Machine (SVM). Class imbalance was resolved using SMOTE (k_neighbors = 5). Hyperparameter optimization was performed using GridSearchCV with 3-fold cross-validation and the F1-weighted metric.
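A sketch of this model-selection step for the SVM case is shown below; the hyperparameter grid and random seed are assumptions, as the exact search space is not reported.

```python
# Sketch: SMOTE oversampling + SVM inside an imblearn pipeline,
# tuned with 3-fold GridSearchCV on the F1-weighted metric.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

pipe = Pipeline([
    ("smote", SMOTE(k_neighbors=5, random_state=42)),
    ("svm", SVC(probability=True)),
])

grid = GridSearchCV(
    pipe,
    param_grid={"svm__C": [0.1, 1, 10], "svm__kernel": ["linear", "rbf"]},  # assumed grid
    cv=3,
    scoring="f1_weighted",
)
grid.fit(X_train, y_train)   # X_train from the feature union sketch above
```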
The SVM model showed the highest performance with 90.0% accuracy and 90.4% F1-score. Gradient Boosting achieved 86.7% accuracy, while Logistic Regression and Random Forest achieved 83.3% accuracy. These results demonstrate that the feature engineering approach used successfully captured linguistic patterns related to SSA.
The ROC analysis revealed AUC = 0.994 for positive SSA statements, AUC = 0.933 for neutral statements, and AUC = 0.919 for negative statements. These high AUC values reflect the controlled structure of the dataset and the distinctiveness of SSA-specific linguistic patterns. Performance differences between categories align with theoretical expectations regarding how users express different types of digital alienation experiences.
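Continuing from the sketches above, the per-class AUC values can be computed in a one-vs-rest fashion; the variable names (y_test, X_test) are assumptions.

```python
# Sketch: one-vs-rest ROC AUC per SSA sentiment class.
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = grid.best_estimator_.named_steps["svm"].classes_
y_test_bin = label_binarize(y_test, classes=classes)
y_score = grid.predict_proba(X_test)

for i, cls in enumerate(classes):
    print(f"{cls}: AUC = {roc_auc_score(y_test_bin[:, i], y_score[:, i]):.3f}")
```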
This framework as seen in Figure 1 illustrates four types of digital alienation (Algorithmic Manipulation, Digital Alienation, Platform Dependency, Echo Chamber Effects) and their relationship with digital platforms. Each type is positioned around a platform-centric structure, and interaction pathways are shown. These structures are not entirely separate; they can overlap and reinforce each other. The visual presents a systematic and theoretically grounded approach to defining forms of digital alienation.
Four machine learning models were compared as seen in Figure 2:
The highest accuracy and F1 score belong to the SVM model (90.0% and 90.4%).
This is followed by Gradient Boosting (86.7%), Logistic Regression (83.3%), and Random Forest (83.3%).
The success of SVM is attributed to its ability to better capture non-linear relationships in high-dimensional data spaces. These results demonstrate the effectiveness of the feature engineering methods used and highlight the important role of algorithm selection in classification performance.
The model produced separate AUC scores for different types of SSA expressions in Figure 3:
Positive expressions: AUC = 0.994 → The most distinct and easily identifiable category in terms of language.
Neutral expressions: AUC = 0.933 → Moderate discrimination power due to ambiguous or complex emotions.
Negative expressions: AUC = 0.919 → The most difficult group to distinguish due to contextual and implicit patterns.
This distribution indicates that the language users employ to express different SSA forms varies in line with theoretical expectations.
SHAP analysis was used to increase the model’s explainability. SHAP values revealed that terms related to the algorithm are more effective in positive SSA classification, while terms related to control are more effective in negative SSA classification.
Additionally, LIME analysis provided local explanations for individual predictions and showed which words had the most significant impact on the model’s decisions.
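A sketch of this explainability step is provided below; the choice of KernelExplainer for SHAP and the text-level wrapper for LIME are assumptions, as the specific explainer configurations are not reported.

```python
# Sketch: SHAP (global/per-prediction attributions) and LIME (local text explanations)
# applied to the fitted grid-search model from the sketches above.
import numpy as np
import shap
from scipy import sparse
from lime.lime_text import LimeTextExplainer

classes = list(grid.best_estimator_.named_steps["svm"].classes_)

# Wrapper so dense arrays from the explainers reach the sparse-trained pipeline
def proba_fn_dense(x):
    return grid.predict_proba(sparse.csr_matrix(np.asarray(x)))

background = X_train[:50].toarray()                 # small background sample
explainer = shap.KernelExplainer(proba_fn_dense, background)
shap_values = explainer.shap_values(X_test[:10].toarray())

# LIME: local explanation for a single raw sentence
def proba_fn_text(texts):
    return grid.predict_proba(features.transform(texts))

lime_explainer = LimeTextExplainer(class_names=classes)
explanation = lime_explainer.explain_instance(test_texts[0], proba_fn_text, num_features=10)
print(explanation.as_list())
```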

5. Discussion

As mentioned before, this study proposes a new term Synthetic Social Alienation (SSA), which examines how algorithm-based systems reshape social connections and individual identities. SSA highlights the nuanced impacts of algorithmic environments, focusing on simulated relationships, the commodification of engagement, and the erosion of realness. These dynamics illustrate how algorithms restructure personal and intellectual interactions, offering a lens through which Marx’s theories of alienation apply to digital spaces (Berry, 2014; Poloni, 2024).
The first dynamic is simulated relationships, in which users engage with curated content and personas that reflect algorithmic priorities rather than organic social interactions, fostering a detachment from real connections. This commodification of engagement is the second dynamic, where social bonds and interactions are reduced to data points such as likes, shares, and comments. These are exploited for economic gain at the expense of genuine relational depth. Finally, performative and curated digital personas disconnect individuals from their true selves and others, leading to a broader sense of alienation within homogenized content ecosystems and erosion of realness.
The theory of alienation provides a foundational lens for understanding algorithmic detachment syndrome (ADS), an essential part of SSA. Alienation arises from individuals’ estrangement from their labor, the products they create, their fellow humans, and ultimately themselves within a capitalist system (Marx, 1844/1978). In algorithmic media, this alienation is not merely economic but extends into the cognitive and social realms, as individuals become estranged from actual social experiences and diverse perspectives.
The algorithms commodify their engagement for corporate profit, mirroring Marx’s concept of labor exploitation. Algorithms prioritize engagement metrics, fostering echo chambers and fragmenting communal bonds. Users experience connections shaped by artificial mechanisms rather than genuine human interaction—algorithmic feedback loops foster curated digital personas, estranging individuals from their original identities. Social media platforms rely on user-generated content and interactions, which are captured as data and sold to advertisers. Terranova (2000) highlights how this unpaid labor is central to the digital economy, reinforcing systemic exploitation.
In the context of ADS, users unknowingly contribute to their detachment. Their time and effort engaging with algorithmically curated content reinforces the platforms’ profit models while deepening their cognitive narrowing. ADS thus aligns with the commodification of user activity, illustrating how digital labor extends the scope of alienation theory into the algorithmic age.
Zuboff (2019) argues that these systems extract behavioral data and manipulate user behavior to generate predictable engagement patterns, ensuring profitability. Surveillance capitalism intensifies ADS by stripping users of their autonomy. Algorithms dictate what content individuals see, effectively shaping their thoughts and interactions. This constant manipulation fosters a sense of detachment as users lose control over their digital environments. The resulting alienation is compounded by the inability to escape these systems, which are embedded into the fabric of modern life.
van Dijck (2013) critiques the culture of connectivity in which social media platforms simulate social interactions through algorithmic mechanisms and argues that these platforms commodify relationships by prioritizing content that maximizes engagement rather than fostering meaningful connections.
ADS draws on van Dijck’s concept of synthetic engagement to describe the alienation users feel when algorithms mediate their interactions. Rather than engaging with content or people authentically, users interact within a synthetic ecosystem designed to generate revenue. This synthetic nature of engagement amplifies the disconnection from genuine social experiences, reinforcing the core elements of ADS.
Algorithms designed to personalize user experiences often trap individuals in echo chambers, repeatedly exposing them to similar viewpoints and content. This limits exposure to diverse perspectives and fragments of social cohesion. SSA encapsulates the psychological and social impact of this narrowing of engagement, manifesting as isolation and estrangement in a hyperconnected world.
The concept of Synthetic Social Alienation (SSA) is intricately tied to the dynamics of discourse. Discourse, as the structured way language conveys meaning and constructs social reality (Foucault, 1980), is central to understanding how algorithm-based environments shape user experiences. Through content curation by algorithms, social media platforms influence the discursive landscapes users inhabit by amplifying dominant narratives while marginalizing others (Pariser, 2011; Nguyen, 2020).
The feedback loops algorithms create prioritize content that maximizes user engagement over diversity, forming echo chambers where certain discourses are reinforced and counter-narratives are excluded (Bruns, 2019; Flaxman et al., 2016). This narrowing of discourse limits intellectual engagement and fosters a sense of disconnection from broader societal conversations.
The contextual nature of algorithmic systems further compounds SSA. Social media platforms operate within specific sociocultural contexts, shaping and being shaped by their users’ values, norms, and ideologies. Algorithms reflect and reinforce these contexts by selectively curating content that aligns with users’ preexisting beliefs, perpetuating systemic biases and exacerbating polarization (van Dijck, 2013; Zuboff, 2019). In politically polarized environments, for instance, users are more likely to encounter content that reinforces their ideological stances, deepening divisions and reducing opportunities for cross-cutting dialogue (Nguyen, 2020).
The commodification of discourse on social media also plays a significant role in SSA. Algorithms designed to maximize engagement favor sensationalist and emotionally charged content, which distorts reality and undermines critical discourse (Zimmer et al., 2019). This aligns with the theory of alienation, where users are both producers and products of these systems, estranged from genuine interactions and intellectual diversity (Marx, 1844/1978).
Moreover, social media platforms have fragmented the public sphere by creating spaces where discourse is context-dependent and user-generated content lacks consistent editorial standards (Couldry & Mejias, 2019). This fragmentation intensifies SSA by complicating users’ ability to navigate polarized and complex information landscapes.
SSA provides a framework to analyze how algorithm-driven environments alter mediated discourse, emphasizing three key dynamics:
(i) Discursive Fragmentation: Algorithmic prioritization of engagement-based content creates ideological echo chambers, where exposure to counter-discourses is minimized. This limits the diversity of perspectives users encounter, reinforcing epistemic closure.
(ii) Lexical and Rhetorical Shifts: Algorithms favor sensationalism and emotionally charged content, leading to linguistic homogenization where discourse becomes simplified, reactionary, and performative rather than nuanced and deliberative.
(iii) Algorithmic Visibility and Speech Economy: Algorithmic curation determines which voices gain prominence and which are suppressed, shaping power dynamics in digital conversations. Emerging discourses are often dictated by platform incentives rather than organic user engagement.
Table 6 summarizes Synthetic Social Alienation (SSA).
On the other hand, if these findings are interpreted in relation to the Instagram recommendation system specifically:
The findings indicate that machine learning models can effectively identify SSA patterns in digital communication. However, although this study was supplemented with synthetic data to address the limitation of sample size, it is clear that more robust results could be obtained in scenarios with a larger sample size. Future studies should aim to track the evolution of SSA patterns over time using larger and multilingual datasets, cross-platform analysis, and longitudinal studies.
The full implementation (including synthetic data generation templates and evaluation metrics) is available at: https://github.com/deregulcicek/haybik (accessed on 4 August 2025).
This repository contains detailed documentation and instructions for reproduction.
This work presents the first computational approach to analyzing Synthetic Social Alienation in social media using natural language processing methods. The results show that machine learning models can effectively identify SSA patterns. However, the limitations of the current dataset should be considered, and future work should be supported by larger and more diverse datasets.

6. Conclusions

This study demonstrates that Synthetic Social Alienation (SSA) extends beyond individual psychological responses to fundamentally reshape public discourse in algorithmically mediated environments. Through discourse analysis and interview data, the research reveals how algorithmic curation amplifies emotionally charged content, fragments conversations into ideological silos, and erodes deliberative democratic dialogue. This aligns with prior research on algorithmic visibility and speech inequality (Gillespie, 2014; Noble, 2018), confirming that users are differentially empowered to speak and be heard, based on opaque platform incentives rather than the merit of ideas.
Interview responses consistently indicated that users feel compelled to measure their own lives against algorithmically curated representations—leading to anxiety, inadequacy, and social withdrawal. These findings resonate with empirical studies such as Lup et al. (2015) and Fardouly et al. (2018), which demonstrate that exposure to idealized content on platforms like Instagram correlates with lower self-esteem and higher social comparison. While some users expressed awareness of these manipulations and adopted resistance strategies, others internalized the algorithmic logic as natural and unavoidable.
Given these effects, this study proposes a multi-pronged strategy to address SSA. However, policy responses must move beyond vague calls for transparency. Drawing from regulatory literature (e.g., Yeung, 2018; Helberger et al., 2020), we identify three targeted interventions: (i) Algorithmic Transparency and Explainability: Social media companies must be required to disclose how content is ranked and recommended. Explainability—already a principle in emerging AI regulation such as the EU AI Act—should be operationalized through user-facing tools that allow inspection and customization of algorithmic settings. (ii) Content Diversity Mandates: Platforms should be incentivized or required to introduce diversity-enhancing mechanisms that intentionally broaden content exposure. This includes proactive measures to reduce ideological feedback loops while respecting user preferences and consent. (iii) Critical Digital Literacy: Educational institutions and civil society actors must scale up initiatives that teach users to critically interpret algorithmically curated content. Such programs should include practical training on platform design, content moderation logics, and data privacy, enabling users to navigate digital environments with greater agency (Mihailidis & Viotty, 2017).
These interventions are not mutually exclusive; they must be embedded within a broader regulatory and ethical framework that treats algorithmic mediation as a socio-technical system rather than a neutral process. Addressing SSA thus requires sustained collaboration among platform designers, regulators, educators, and users themselves.
In sum, this study advances the concept of Synthetic Social Alienation as a critical lens for understanding the affective, discursive, and systemic dimensions of digital life. It calls for urgent theoretical and policy attention to reconfigure the digital public sphere in ways that restore communicative justice, emotional resilience, and epistemic diversity.

Author Contributions

Conceptualization, A.S. and H.G.; Methodology, A.S. and H.G.; Software, G.D.; Validation, G.D.; Formal analysis, A.S. and H.G.; Investigation, A.S. and H.G.; Writing—original draft, A.S. and H.G.; Writing—review & editing, A.S., H.G. and G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Scientific Council of University Research and Creation of Istanbul Aydın University with the approval number 25-41 on 23 May 2025.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. Due to ethical concerns and in order to protect the anonymity of the interviewees, interview recordings, transcripts, and field notes cannot be shared with other researchers.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Andrejevic, M. (2019). Automated media. Routledge.
  2. Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–529.
  3. Berry, D. M. (2014). Critical theory and the digital. Bloomsbury Publishing.
  4. Bok, S. K. (2023). Enhancing user experience in E-commerce through personalization algorithms. Available online: https://www.theseus.fi/bitstream/handle/10024/815645/Bok_Sun%20Khi.pdf?sequence=2&isAllowed=y (accessed on 8 August 2025).
  5. Bonini, T., & Treré, E. (2025). Furthering the agenda of algorithmic resistance: Integrating gender and decolonial perspectives. Dialogues on Digital Society, 1(1), 121–125.
  6. Bruns, A. (2017, September 14–15). Echo chamber? What echo chamber? Reviewing the evidence. 6th Biennial Future of Journalism Conference (FOJ17), Cardiff, UK.
  7. Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4), 14261.
  8. Bucher, T. (2018). If… then: Algorithmic power and politics. Oxford University Press.
  9. Buhmann, A., Paßmann, J., & Fieseler, C. (2020). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics, 163(2), 265–280.
  10. Chavanayarn, S. (2024). Epistemic injustice and ideal social media: Enhancing X for inclusive global engagement. Topoi, 43(5), 1355–1368.
  11. Costa Netto, Y., & Maçada, A. C. G. (2019, June 8–14). Social media filter bubbles and echo chambers influence IT identity construction. 27th European Conference on Information Systems (ECIS) (pp. 1–14), Stockholm-Uppsala, Sweden.
  12. Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
  13. Dean, J. (2019). Communicative capitalism and revolutionary form. Millennium, 47(3), 326–340.
  14. Fardouly, J., Willburger, B. K., & Vartanian, L. R. (2018). Instagram use and young women’s body image concerns and self-objectification: Testing mediational pathways. New Media & Society, 20(4), 1380–1395.
  15. Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140.
  16. Fisher, E., & Mehozay, Y. (2019). How algorithms see their audience: Media epistemes and the changing conception of the individual. Media, Culture & Society, 41(8), 1176–1191.
  17. Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(S1), 298–320.
  18. Foucault, M. (1980). Power/knowledge: Selected interviews and other writings 1972–1977. Pantheon.
  19. Fuchs, C. (2014). Social media: A critical introduction. Sage Publications.
  20. Gillespie, T. (2014). The relevance of algorithms. In P. J. Boczkowski, K. A. Foot, & T. Gillespie (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193). MIT Press.
  21. Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82.
  22. Gürkan, H., Serttaş, A., & Sarıkaya, T. (2024). The virtual mask: The dark underbelly of digital anonymity and gender identity construction in Turkey. Journal of Arab & Muslim Media Research, 17(1), 47–65.
  23. Helberger, N., Huh, J., Milne, G., Strycharz, J., & Sundaram, H. (2020). Macro and exogenous factors in computational advertising: Key issues and new research directions. Journal of Advertising, 49(4), 377–393.
  24. Kitchin, R. (2021). The data revolution: A critical analysis of big data, open data and data infrastructures. Sage Publications.
  25. Klinger, U., & Svensson, J. (2018). The end of media logics? On algorithms and agency. New Media & Society, 20(12), 4653–4670.
  26. Kossowska, M., Kłodkowski, P., & Siewierska-Chmaj, A. (2023). Internet-based micro-identities as a driver of societal disintegration. Humanities and Social Sciences Communications, 10, 955.
  27. Loecherbach, F., Moeller, J., Trilling, D., & van Atteveldt, W. (2020). The unified framework of media diversity: A systematic literature review. Digital Journalism, 8(5), 605–642.
  28. Lup, K., Trub, L., & Rosenthal, L. (2015). Instagram #instasad?: Exploring associations among Instagram use, depressive symptoms, negative social comparison, and strangers followed. Cyberpsychology, Behavior, and Social Networking, 18(5), 247–252.
  29. Magalhães, J. C. (2018). Do algorithms shape character? Considering algorithmic ethical subjectivation. Social Media + Society, 4(2), 2056305118768301.
  30. Malterud, K., Siersma, V. D., & Guassora, A. D. (2016). Sample size in qualitative interview studies: Guided by information power. Qualitative Health Research, 26(13), 1753–1760.
  31. Marx, K. (1978). The economic and philosophic manuscripts of 1844. In R. C. Tucker (Ed.), The Marx-Engels reader (2nd ed., pp. 66–125). W.W. Norton & Company. (Original work published 1844).
  32. Mihailidis, P., & Viotty, S. (2017). Spreadable spectacle in digital culture: Civic expression, fake news, and the role of media literacies in “post-fact” society. American Behavioral Scientist, 61(4), 441–454.
  33. Milan, S. (2015). When algorithms shape collective action: Social media and the dynamics of cloud protesting. Social Media + Society, 1(2), 2056305115622481.
  34. Mosco, V. (2016). The political economy of communication (3rd ed.). Sage Publications.
  35. Möller, J., Trilling, D., Helberger, N., & Van Es, B. (2020). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. In Digital media, political polarization and challenges to democracy (pp. 45–63). Routledge.
  36. Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141–161.
  37. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
  38. Pariser, E. (2011). The filter bubble: What is the internet hiding from you? Penguin.
  39. Perez Vallejos, E., Dowthwaite, L., Creswich, H., Portillo, V., Koene, A., Jirotka, M., & McAuley, D. (2021). The impact of algorithmic decision-making processes on young people’s well-being. Health Informatics Journal, 27(1), 1–21.
  40. Poloni, M. (2024). The erosion of the middle class in the age of information: Navigating post-capitalist paradigms of power. Universitat Autònoma de Barcelona.
  41. Rehman, S., Ullah, S., & Tahir, P. (2024). The intersection of language, power, and AI: A discourse analytical approach to social media algorithms. Sociology & Cultural Research Review, 2(4), 277–291.
  42. Reviglio, U. (2020). Personalization in social media: Challenges and opportunities for democratic societies. In Polarization, shifting borders and liquid governance. Springer.
  43. Ross Arguedas, A., Robertson, C., Fletcher, R., & Nielsen, R. (2022). Echo chambers, filter bubbles, and polarization: A literature review. Reuters Institute for the Study of Journalism.
  44. Sanseverino, G. G. (2023). Politics and ethics of user generated content: A cross-national investigation of engagement and participation in the online news ecosystem in 80 news sites [Ph.D. dissertation, Université Paul Sabatier-Toulouse III].
  45. Saurwein, F., & Spencer-Smith, C. (2021). Automated trouble: The role of algorithmic selection in harms on social media platforms. Media and Communication, 9(4), 222–233.
  46. Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1–12.
  47. Seuren, A. J. (2024). Bypassing algorithms, reinforcing stereotypes: Social media experiences of female creators [Ph.D. dissertation, Murdoch University].
  48. Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284.
  49. Stark, B., Stegmann, D., Magin, M., & Jürgens, P. (2020). Are algorithms a threat to democracy? The rise of intermediaries: A challenge for public discourse. AlgorithmWatch, 26. Available online: https://algorithmwatch.org/en/wp-content/uploads/2020/05/Governing-Platforms-communications-study-Stark-May-2020-AlgorithmWatch.pdf (accessed on 5 August 2025).
  50. Sunstein, C. R. (2001). Republic.com. Harvard Journal of Law & Technology, 14(3), 753–766.
  51. Taylor, A. S. (2022). Authenticity as performativity on social media. Palgrave Macmillan.
  52. Terranova, T. (2000). Free labor: Producing culture for the digital economy. Social Text, 63(18), 33–58.
  53. Turkle, S. (2015). Reclaiming conversation: The power of talk in a digital age. Penguin.
  54. van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford University Press.
  55. Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505–523.
  56. Zimmer, F., Scheibe, K., & Stock, W. G. (2019, January 3–5). Echo chambers and filter bubbles of fake news in social media: Man-made or produced by algorithms? Hawaii University International Conferences on Arts, Humanities, Social Sciences & Education, Honolulu, HI, USA.
  57. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Figure 1. SSA Conceptual Framework.
Figure 2. Model Performance Comparison.
Figure 3. Classification Performance According to SSA Expression Types.
Table 1. The Interview Questions for the Participants.
What motivates you to use social media? (e.g., entertainment, news, work, connection)
How familiar are you with algorithms curating content on social media platforms?
Do you notice patterns in the kind of content recommended to you? Can you give examples?
How do you feel about the personalization of content by these algorithms?
Do you think the content you see on social media reflects a wide range of perspectives? Why or why not?
How often do you encounter content or opinions that challenge your beliefs?
How does the content you see on social media affect your perception of others outside your immediate circles?
Do you think social media algorithms make connecting with people from different backgrounds or beliefs easier or harder?
Have you ever tried to bypass or limit algorithmic recommendations? How did that affect your experience?
Table 2. Themes and Observations on Social Media Use and Algorithmic Influence.
Theme | Common Observations | Comments from Interviews
Motivations for Social Media Use | Entertainment, work, and connection are primary drivers, varying by age and occupation. | Profile 6: “I use TikTok for entertainment.”; Profile 9: “I use Instagram to promote causes.”
Awareness of Algorithms | Most users are somewhat aware of algorithms but differ in the depth of their understanding. | Profile 8: “I am very aware of algorithms and how they shape what I see.”; Profile 7: “I do not understand how it works.”
Patterns in Recommended Content | Users observe repetitive content loops, often aligned with their preferences. | Profile 4: “I see much political content that aligns with my views.”; Profile 3: “I get recommended gaming videos all the time.”
Feelings About Personalization | Mixed feelings: some appreciate the convenience, others find it intrusive. | Profile 6: “I find the recommendations helpful.”; Profile 9: “It is manipulative; it makes me see only what I like.”
Exposure to Diverse Perspectives | Echo chambers limit exposure to differing viewpoints. | Profile 7: “All I see is content that confirms what I already know.”; Profile 9: “It is hard to reach new audiences.”
Encountering Challenging Content | Encounters with belief-challenging content are rare, especially among younger users. | Profile 3: “I never see content that challenges my views.”; Profile 9: “Sometimes I face polarized discussions.”
Effects on Perceptions of Others | Limited or skewed portrayals in content shape perceptions of others. | Profile 10: “I feel disconnected from others.”; Profile 8: “My feed shapes my view of people in my field.”
Connections Across Differences | Difficulty connecting with diverse groups due to algorithmic filtering. | Profile 7: “I mostly see content from people like me.”; Profile 4: “It is hard to engage with differing perspectives.”
Attempts to Bypass Algorithms | Attempts include manual searching or following diverse accounts, with mixed success. | Profile 3: “I follow accounts manually to get different content.”; Profile 10: “I try to use the platform’s tools, but they do not work well.”
Table 3. Discursive Strategies of Social Media Users.
User Type | Discourse Patterns | Key Linguistic Markers | Implications
Passive Consumers (Profiles 1, 2, 3, 5, 8) | Repetitive, deterministic | “I always see the same,” “It’s an endless loop” | Low agency, algorithmic dependence
Active Curators (Profiles 4, 6, 7, 9, 10) | Adaptive, strategic | “I try to manipulate it,” “I avoid certain content” | Algorithmic literacy, resistance discourse
Algorithm-Dependent Users (Professionals, Activists) | Paradoxical, negotiated | “I need it but hate it,” “I optimize for reach” | Tension between reliance and critique
Table 4. SSA in Users’ Experiences.
Theme | Common Observations | Comments from Interviews
Understanding of SSA | Most users recognize that algorithms create a sense of detachment or alienation, but the depth of their understanding varies. | Profile 1: “It feels like I am constantly being fed the same stuff, and the platform does not care about what I need.”
Perceptions of Social Interaction | Users experience a sense of isolation or detachment from genuine human connection due to algorithmic filtering and content curation. | Profile 6: “Even though I interact with people online, I feel like I am not truly connecting with them.”; Profile 7: “It is like we are all in echo chambers, and it does not feel real.”
Impact on Emotional Well-being | SSA is often linked to frustration, dissatisfaction, and emotional disengagement from the content users consume. | Profile 8: “I sometimes feel emotionally drained from the repetitive content that does not resonate with me.”; Profile 3: “The endless gaming videos make me feel stuck.”
Alienation from Diverse Perspectives | The algorithmic environment limits exposure to differing perspectives, reinforcing feelings of detachment from broader social conversations. | Profile 9: “I do not see much of the other side of issues, which makes me feel disconnected from people with different views.”
Attempts to Counter SSA | Some users actively try to break free from SSA by seeking more diverse content, though success varies due to algorithmic filtering. | Profile 2: “I manually search for new topics to get out of the bubble, but it does not always work.”; Profile 10: “I follow accounts from different perspectives, but it does not make much of a difference.”
Social Engagement in Digital Spaces | SSA is often linked to superficial or transactional interactions in digital spaces rather than meaningful, in-depth connections. | Profile 4: “I am engaging with the same type of people, but it feels more like networking than true connection.”; Profile 5: “There is only so much I can gain from these platforms before they start feeling empty.”
Table 5. Strategies for Managing Algorithmic Content.
Theme | Common Observations | Comments from Interviews
Active Content Curation | Some users actively engage with algorithms by curating their feeds, choosing specific accounts to follow, or using search features. | Profile 1: “I try to follow various accounts to diversify my feed.”
Manual Content Search | Users rely on manual searches and browsing to find content outside algorithmic suggestions. | Profile 6: “I search for things manually to find new content that the algorithm does not suggest.”
Engagement with Diverse Accounts | Some users follow a range of diverse accounts or topics to counter algorithmic homogeneity. | Profile 9: “I follow accounts that challenge my views to broaden my perspective.”
Frequent Unfollowing/Muting | Some users unfollow or mute certain accounts to control their feeds and prevent overexposure to repetitive content. | Profile 2: “I mute accounts that keep pushing the same content I do not find interesting.”
Limiting Time on Platforms | Reducing overall platform use is a strategy for coping with content overload and algorithmic influence. | Profile 10: “I limit my time on social media so I do not get caught in these loops.”
Seeking Alternative Platforms | Some users move to other platforms with less algorithmic control or a different content structure. | Profile 5: “I have started using a new platform where I can curate my content more freely.”
Awareness and Avoidance of Bias | Some users consciously avoid content that reinforces biases by actively seeking diverse opinions or questioning algorithmic suggestions. | Profile 4: “I question whether the content I see is biased, especially in politics.”
Engaging with Algorithmic Feedback | A few users try to steer algorithmic suggestions through likes, shares, and other feedback signals to shape the content they receive. | Profile 8: “I like videos that are more diverse to try to influence my recommendations.”
Table 6. The Framework of Synthetic Social Alienation (SSA).
Concept | Description | Key Implications
Synthetic Social Alienation (SSA) | The phenomenon where algorithm-driven environments reshape social interactions and identities, leading to detachment from real-world connections. | Alienation in digital spaces, cognitive narrowing, and fragmented social bonds.
Simulated Relationships | Users engage with algorithmically curated content and personas, prioritizing engagement over authenticity. | Weakens genuine human connections, fosters parasocial relationships.
Commodification of Engagement | Social interactions (likes, shares, comments) are converted into data and exploited for profit. | Reduces relationships to transactional engagements, reinforcing corporate control.
Erosion of Realness | Users create and maintain digital personas that prioritize visibility over authenticity, shaped by algorithmic incentives. | Disconnection from true self, loss of diverse perspectives.
Algorithmic Detachment Syndrome (ADS) | A form of digital alienation where users experience cognitive and social estrangement due to constant algorithmic mediation. | Reinforces echo chambers, reduces exposure to new ideas.
Surveillance Capitalism | Platforms extract behavioral data, manipulate user behavior, and reinforce engagement loops for monetization. | Loss of autonomy, increased corporate influence over thought and behavior.
Discursive Fragmentation | Algorithmic curation prioritizes content that maximizes engagement, leading to ideological echo chambers. | Limits intellectual diversity and increases polarization.
Lexical and Rhetorical Shifts | Sensationalized and emotionally charged content dominates discourse, simplifying discussions. | Undermines critical thinking, encourages reactionary communication.
Algorithmic Visibility and Speech Economy | Algorithms determine whose voices are amplified and whose are suppressed, influencing public discourse. | Concentrates power in digital platforms, marginalizes counter-narratives.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
