Article
Peer-Review Record

Synthetic Social Alienation: The Role of Algorithm-Driven Content in Shaping Digital Discourse and User Perspectives

Journal. Media 2025, 6(3), 149; https://doi.org/10.3390/journalmedia6030149
by Aybike Serttaş 1,*, Hasan Gürkan 2 and Gülçicek Dere 3
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 2 July 2025 / Revised: 2 September 2025 / Accepted: 5 September 2025 / Published: 10 September 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Although the term "Synthetic Social Alienation" is evocative, it overlaps with established notions such as filter bubbles and echo chambers. A section that explicitly distinguishes SSA from these earlier constructs, perhaps by stressing its affective and structural dimensions, would sharpen the theoretical contribution. Relatedly, you list four research questions but do not specify whether they are exploratory or accompanied by hypotheses. A brief statement indicating the exploratory nature of the work or, alternatively, recasting the questions as testable propositions would resolve that ambiguity.

Your qualitative component rests on ten semi‑structured interviews, a number that can be perfectly adequate; nevertheless, it would be helpful to explain how saturation or "information power" was achieved. Providing basic demographics (age range, gender, primary platform) will further contextualise the sample. In the computational portion, please report the size of the social‑media corpus, the exact scraping window and the inclusion criteria. These details are crucial for replication and will align the study with current transparency norms.

The sentiment‑analysis section would also benefit from a more robust evaluation. Testing the model on only five sentences does not provide reliable performance estimates. I recommend replacing the toy test with five‑fold (or greater) cross‑validation on a larger held‑out set and presenting the resulting confusion matrix. While you correctly attempt to address class imbalance with SMOTE, synthetic oversampling can over‑fit very small datasets, and you should cite a recent methodological source to that effect.
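To make the reviewer's caution about SMOTE concrete, the sketch below implements the core SMOTE idea, interpolating between pairs of real minority samples, in plain Python. The data and function name are illustrative, not the authors' pipeline: with very few minority samples, every synthetic point lies on a line segment between two real points, so the oversampled class covers only a tiny region of feature space, which is one source of the over-fitting risk.

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority samples by interpolating between
    randomly chosen pairs of existing minority samples (SMOTE-style).
    `minority` is a list of feature vectors (lists of floats)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # two distinct real samples
        t = rng.random()                 # interpolation factor in [0, 1]
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic

# Toy minority class of three 2-D points: all synthetic samples stay
# inside the convex hull of these three points, however many we draw.
minority = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
new = smote_like(minority, n_new=5)
```

Because the generated points never leave the convex hull of the originals, a classifier can memorise that small region rather than learn the class, which is why the reviewer asks for a methodological citation on SMOTE with small datasets.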

A few rhetorical adjustments will strengthen the presentation. Where you claim that algorithms “create” SSA, consider substituting “may contribute to” so that the causal language matches your cross‑sectional evidence. Likewise, the abstract currently describes the sentiment model as highly successful, yet the ROC‑AUC is about 0.67; rephrasing this as “modest” or “moderate” will keep expectations realistic. I also encourage you to discuss an interesting tension in your findings: if SSA‑related keywords are rare in the corpus, what does that imply for the pervasiveness of the phenomenon you describe?

The ethics statement would be clearer if you specify the IRB approval code and explain your consent procedure. Although you cannot share full transcripts, depositing redacted excerpts and analytical code on an open repository such as OSF, or providing a concise justification for non‑sharing, will satisfy emerging FAIR‑data expectations.

Aside from these points, only minor editorial polishing is needed; for example, separating theoretical definitions from empirical results in your current table and correcting a handful of idiomatic slips (e.g., “deterred from speaking” rather than “deterred to speak”).

Author Response

We sincerely thank the Reviewer for the detailed and constructive feedback, which has significantly improved the quality and clarity of our manuscript. Below, we address each comment point-by-point and summarize the corresponding revisions made in the manuscript.

 

We have revised the theoretical framework to explicitly distinguish SSA from related concepts such as filter bubbles and echo chambers. A new paragraph elaborates on how SSA encompasses affective disconnection, algorithmic structuring of communication, and attention commodification, extending beyond ideological reinforcement alone.

 

We have added a statement clarifying that the study adopts an exploratory design, given the emergent and interdisciplinary nature of the SSA construct. This is now stated at the end of the research questions section.

 

We have expanded the methodology section to explain our sampling strategy. Demographic details (age range, gender, and primary social media platform) have also been added.

We have updated the methods section to specify the corpus details:

  • Size: 180 examples (90 original, 90 synthetic)
  • Scraping window: January–March 2024
  • Inclusion criteria: Public comments on posts containing digital alienation-related keywords.
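The stated inclusion criterion amounts to a keyword filter over public comments. A minimal sketch of such a filter follows; the keyword list and example comments are hypothetical placeholders, not the authors' actual lexicon or data.

```python
# Hypothetical sketch of the inclusion criterion: keep only public
# comments containing at least one digital-alienation-related keyword.
# The keyword set below is illustrative only.
KEYWORDS = {"alienation", "isolated", "trapped", "echo chamber"}

def include(comment: str) -> bool:
    """Return True if the comment matches any keyword (case-insensitive)."""
    text = comment.lower()
    return any(kw in text for kw in KEYWORDS)

comments = [
    "I feel trapped by my feed lately.",
    "Great recipe, thanks for sharing!",
    "The algorithm keeps me isolated from other views.",
]
corpus = [c for c in comments if include(c)]
```

A substring filter like this is the simplest possible operationalisation; a real pipeline would also need tokenisation and handling of inflected forms.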

These additions align our study with transparency and reproducibility norms.

We now describe model performance as “moderate” or “promising,” aligning language with reported metrics.

We added a paragraph in the Discussion addressing this tension. We interpret the rarity as reflective of both the subtle nature of alienation and users’ limited vocabulary to describe such experiences—underscoring the need for new conceptual tools like SSA.

Once again, we thank the reviewer for the invaluable feedback. All comments have been carefully addressed to improve the clarity, rigor, and transparency of our study.

Reviewer 2 Report

Comments and Suggestions for Authors

First of all, thanks very much to the author for the insights this paper suggests. However, I find it appropriate to recommend, if I may, some additions:

In general, the proposal is compelling, but it runs the risk of remaining a self-evident insight. To improve this, it would be beneficial to bring into the theoretical framework Dean's concept of communicative capitalism, which looks precisely at the transformative and exploitative dynamics incurred at the communicative level as a result of the pervasiveness of the capitalist model. Moreover, considering lines 43-52, the elaboration on the difference between filter bubbles and echo chambers does not clarify the phenomena in terms of definition but only with respect to the effects they entail. In this regard, I would suggest integrating Bruns' work, which seeks to demonstrate the factionalism of echo chambers as a phenomenon triggering polarization in society. Similarly, when speaking of the denied agency of subjects, I would integrate the recent perspective of “algorithmic resistance” by Bonini and Treré, which can help reinforce the underlying thesis, namely the emergence of alienation. Still, I find the logical leap from the individual to spillovers to the community problematic (164-168), as this remains an assumption not supported by scientific evidence from studies that have attempted to go in that direction (e.g., Kossowska et al.).

 

In the methodology section, the sentence at line 170, “The study addresses alienation as a systemic consequence of external forces shaping human interactions.”, is controversial because it implies a communicative and technological determinism that should be problematized further. If the external forces implied are algorithmic ones, it is appropriate to relativize the impact they may have on the totality of human interactions. Regarding the methodological framework, I would reformulate the questions into two main questions and two in-depth sub-questions to allow aspects of alienation to emerge. It would also be necessary to explain the ethical approach taken: although participation was voluntary, what attention was paid to the concept of diversity, how were the participants selected, over what period were the interviews conducted, how long did they last on average (e.g., 1.5 h), and, most importantly, why were they stopped at the rather small number of 10 interviewees? As for the interview outline, I would move it to an annex, but make explicit the references to the literature that inspired it and how the risks of bias were mitigated (e.g., misunderstanding or social desirability during the interviews, especially with broad and vague questions such as: How does the content you see on social media affect your perception of others outside your immediate circles?). The idea of operationalizing the reading of interviewees' words seems to me an interesting methodological anchor and denotes considerable effort. However, I would like the procedure followed to be better explained in the method section rather than in the results (where, for example, “SSA phrase” appears, which, if I am not mistaken, is a synthesis performed by the authors that remains obscure to the reader). In line with this, it seems to me that the results lack a more organic and informed discussion of literature such as that of critical algorithm studies. Furthermore, the study's limitations should be better highlighted, and the final model should be reconsidered as a graphic structure different from the current table format, one that represents the relationships between the identified aspects (which currently do not seem mutually exclusive).

 

References

Dean, J. (2019). Communicative capitalism and revolutionary form. Millennium, 47(3), 326–340. https://doi.org/10.1177/0305829819840624

Bruns, A. (2017). Echo chamber? What echo chamber? Reviewing the evidence. Paper presented at the 6th Biennial Future of Journalism Conference (FOJ17), 14–15 September 2017.

Kossowska, M., Kłodkowski, P., Siewierska-Chmaj, A., et al. (2023). Internet-based micro-identities as a driver of societal disintegration. Humanities and Social Sciences Communications, 10, 955. https://doi.org/10.1057/s41599-023-02441-z

Bonini, T., & Treré, E. (2025). Furthering the agenda of algorithmic resistance: Integrating gender and decolonial perspectives. Dialogues on Digital Society, 1(1), 121–125. https://doi.org/10.1177/29768640241312114

Author Response

We would like to express our sincere gratitude to the reviewer for the insightful and constructive comments, which have substantially contributed to improving the clarity, depth, and academic rigor of our manuscript. Below, we detail the revisions made in response to each comment.

We appreciate this valuable observation. In the revised version, we have significantly expanded the theoretical framework to include Jodi Dean’s concept of communicative capitalism, which contextualizes the affective and exploitative dynamics of digital platforms. Furthermore, we clarified the conceptual distinction between filter bubbles and echo chambers, and integrated Axel Bruns’ (2017) empirical critique of echo chambers as factionalized clusters fostering polarization. We also introduced Bonini and Treré’s (2022) work on algorithmic resistance to nuance the discussion around user agency and alienation. Lastly, we addressed the issue of communal spillover effects with references to recent empirical research (Kossowska et al.) and revised the phrasing to avoid overgeneralization and unsubstantiated claims.

We thank the reviewer for pointing out this methodological issue. We have rephrased the sentence to avoid deterministic framing, now highlighting that algorithmic systems are among several external forces contributing to alienation within specific media ecologies. The new phrasing reflects a more relational and contextual understanding, aligning with a non-deterministic perspective. Moreover, we have revised the research questions accordingly, as can be seen in the manuscript. This structure enables greater focus and allows alienation to emerge more organically in the analysis. We revised the Methods section to include a detailed account of how SSA was identified through thematic coding and phrase clustering. The term “SSA phrase” is now clearly defined as a synthesis informed by user narratives and discursive features indicative of alienation. We have removed any ambiguities from the Results section and clarified our analytical process upfront.

We now provide a detailed participant profile table and narrative, outlining age, gender, and primary platforms. Interviews were conducted through May 2025, lasted on average 45–60 minutes, and were approved by the Ethics Committee. Participation was voluntary, with informed consent obtained digitally and verbally. The sample size of 10 is justified through the principle of information power (Malterud et al., 2016), as data saturation was reached during thematic coding. This information has been added to the manuscript.

We have now embedded a more explicit and literature-informed discussion of our findings in relation to critical algorithm studies (e.g., Noble, 2018; Gillespie, 2018). Each analytical theme is now framed with relevant theoretical work to avoid descriptive overreach and better anchor claims in the literature.

We have significantly expanded the conclusion, grounding each recommendation (transparency, regulation, digital literacy) in relevant empirical and regulatory literature (e.g., European Commission’s Digital Services Act, Gillespie’s platform governance work). The final section now balances normative implications with grounded analysis and is better aligned with the overall argument.

Once again, we are very grateful to the reviewer for the depth and precision of the feedback. These revisions have greatly strengthened the manuscript.

 

Reviewer 3 Report

Comments and Suggestions for Authors

The paper offers valuable insights into users' perceptions of algorithmic influence on their digital experiences and proposes a new term, Synthetic Social Alienation (SSA), which aims to extend Marx's alienation theory to contemporary digital spaces. It highlights important issues such as the commodification of attention, the reinforcement of biases, and the fragmentation of discourse. However, the methodology, particularly the very small sample size for both qualitative interviews and the sentiment analysis, raises serious concerns about the generalizability and robustness of the findings.

The literature review is relevant, but often repetitive and unidirectional. In the early pages, the same core points are reiterated multiple times without engaging sufficiently with the complexity and divergence in the literature. Filter bubbles and echo chambers, for instance, are presented as established and necessarily negative effects. The author(s) state: “Besides this, echo chambers and filter bubbles limit the diversity of perspectives encountered by users.” However, recent empirical research suggests that this is not always the case. (see e.g. Möller, J., Trilling, D., Helberger, N., & Van Es, B. (2020). Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. In Digital media, political polarization and challenges to democracy (pp. 45-63). Routledge.) Echo chambers and filter bubbles may also increase exposure to diverse viewpoints depending on multiple factors such as user behavior, platform design, and algorithmic implementation. Moreover, the author(s) could better distinguish the mechanisms at play: filter bubbles occur without the user's awareness, while echo chambers often result from natural tendencies like homophily bias. A more precise engagement with the literature would help refine these distinctions.

The paper often cites outdated or insufficiently empirical sources. For example, the claim that “When Pariser (2011) argues that such systems create a ‘personalized universe of information’ that filters out opposing views, contributing to ideological segregation” relies on anecdotal evidence from almost 15 years ago. Pariser’s analysis was insightful at the time, but it emerged in a very different digital context. Since then, hundreds of empirical studies have shown more nuanced dynamics, where exposure to opposing views can increase, decrease, or polarize depending on context. Likewise, the citation of Terranova (2000) to support the idea that “shares, comments, and clicks—becomes a valuable commodity that platforms monetize” is questionable, as the current logic of surveillance capitalism had not yet emerged in 2000. More recent scholarship would significantly strengthen these claims. Similarly, the statement that “Algorithms prioritize content that maximizes engagement, amplifying emotionally charged or sensationalist content, which can lead to a distorted sense of social reality (van Dijck, 2013)” needs to be backed up with newer, more data-driven studies.

Conceptually, important terms remain vague. The sentence “Policymakers must regulate platforms to ensure that algorithms prioritize diverse content exposure over profit-driven engagement” assumes a clear-cut opposition between diversity and profitability that is not always valid. In some cases, diverse content can drive engagement. The term "diverse content" itself needs clarification: what does it mean, and how can it be operationalized? Relevant literature has not been cited (e.g. Loecherbach, F., Moeller, J., Trilling, D., & van Atteveldt, W. (2020). The unified framework of media diversity: A systematic literature review. Digital Journalism, 8(5), 605-642.). Similarly, the phrase “Nguyen (2020) highlights how algorithms promote epistemic bubbles by unintentionally omitting alternative viewpoints” raises the question: what counts as an alternative viewpoint? Would personalization algorithms be expected to expose users to all possible positions on every issue? Would users have the time or capacity to process such exposure? A clearer conceptual framework is needed to unpack these tensions.

The article employs established metaphors without engaging with the literature that already developed them. When the authors refer to “trapping algorithms as traps,” they echo a metaphor explored very effectively by Seaver (2019) in his article Captivating algorithms: Recommender systems as traps. That source should be cited and discussed.

In terms of originality, the notion of Synthetic Social Alienation is potentially interesting but insufficiently developed. The paper lacks clear definitions of SSA, digital alienation, or even alienation more generally. These should be clarified and related more directly to existing literature in digital sociology or media studies.

From a methodological standpoint, the weaknesses are substantial. The qualitative analysis is based on only ten interviews. It is unclear whether saturation was reached, or how participants were selected. Moreover, there is a striking discrepancy between the qualitative and quantitative findings. In the interview data, users describe feeling “trapped,” “anxious,” experiencing “helplessness,” and encountering “polarized language.” Yet the sentiment analysis reports an “average sentiment score... around 0.087 and neutral” and states that “SSA and emotional expressions are vague, not direct.” This contradiction is critical and cannot be overlooked. The authors must reconcile these findings by either questioning the validity of the sentiment analysis or interpreting the qualitative data with more caution. 

The research questions are also too broad and compound multiple inquiries. RQ1 asks: “How do social media algorithms influence users’ sense of agency, belonging, and intellectual engagement?” These are three separate phenomena, each deserving its own question. RQ2 similarly conflates linguistic and discursive patterns. RQ3 merges digital alienation with professional and activist discourse. The current formulation lacks focus and prevents meaningful answers within the scope of the article.

The conclusion introduces recommendations to mitigate the effects identified, but these remain underdeveloped. The article states that “users often measure their own lives against the curated realities presented online, resulting in decreased self-esteem and heightened social anxiety.” This is an important point, but it is not supported by specific empirical evidence. Why not cite studies on platforms like Instagram that have investigated this effect in depth? In general, policy recommendations should follow a robust analysis. As it stands, the article lacks sufficient engagement with regulatory literature and provides no substantial discussion of intervention strategies. This section should either be expanded significantly or removed.

Overall, the article addresses a very timely and important issue. The qualitative exploration of Synthetic Social Alienation provides promising insights. However, the methodological limitations, especially the extremely limited sample sizes in both the qualitative and quantitative parts, severely undermine the robustness and credibility of the findings. 

The authors are encouraged to revise the manuscript thoroughly. These are not minor edits but rather require a re-evaluation of the research design, the collection of more data, clarification of conceptual terms, and significant restructuring of the argument. The current version attempts too much at once (literature review, theory-building, empirical study, and even policy proposals in the conclusions) without adequate development of any of these components, which ultimately weakens the coherence and impact of the article.

Author Response

We would like to sincerely thank the Reviewer for their thoughtful, critical, and constructive feedback. Each point raised was instrumental in helping us revise and improve the clarity, rigor, and theoretical depth of our manuscript. Below we provide a detailed, point-by-point response to all comments, indicating how we have addressed the issues raised.

We have significantly revised the theoretical framework to clearly distinguish Synthetic Social Alienation (SSA) from established concepts such as filter bubbles, echo chambers, and epistemic bubbles. SSA is now presented as a multidimensional concept with affective, structural, and discursive dimensions, which extend beyond ideological isolation. We have added a dedicated subsection elaborating on how SSA differs conceptually and experientially.

The research questions have been revised to improve focus and specificity. Each question now targets a single concept: RQ1 now addresses agency only, while RQ2 focuses solely on linguistic markers of SSA. This restructuring allows for clearer analysis and interpretation.

We have thoroughly revised the literature review to eliminate repetition and incorporate recent empirical studies. We now engage with nuanced perspectives that challenge deterministic views of echo chambers and filter bubbles. Specifically, we have cited Möller et al. (2020), Loecherbach et al. (2020), and Seaver (2019), as well as recent literature on algorithmic diversity, personalization, and platform design.

We now provide clear operational definitions of these terms and contextualize them with references to relevant frameworks, including the Unified Framework of Media Diversity (Loecherbach et al., 2020) and current debates in digital epistemology. This adds precision and theoretical consistency.

We have reworked the conceptual section on SSA, now offering a comprehensive definition grounded in Marxist alienation theory, digital labor literature, and affect theory. Each dimension (algorithmic manipulation, emotional disconnection, and discursive displacement) is now separately described and anchored in the literature.

We acknowledge the limitations of our sample. To address this:

  • We clarified the use of Malterud et al.’s “information power” to justify the qualitative sample size and included participant demographics (age, gender, platform).
  • We expanded the computational dataset to include 180 labeled training samples and 30 test examples, balancing real and synthetic data.
  • The sentiment analysis now uses 5-fold cross-validation, and we report detailed model performance, including confusion matrices, SHAP and LIME explainability metrics.
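The evaluation scheme described above can be sketched as follows: stratified 5-fold cross-validation with a single confusion matrix aggregated over all held-out folds. The data and the majority-class baseline below are illustrative stand-ins for the 180-example corpus and the authors' actual sentiment model.

```python
import random
from collections import Counter

def stratified_kfold(labels, k, seed=0):
    """Yield k (train_idx, test_idx) splits that preserve class balance."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):     # deal indices round-robin per class
            folds[j % k].append(i)
    for t in range(k):
        test = folds[t]
        train = [i for f in range(k) if f != t for i in folds[f]]
        yield train, test

# Toy balanced dataset standing in for the 180-example corpus.
labels = ["alienated"] * 90 + ["neutral"] * 90

# Aggregate one confusion matrix over all five held-out folds, using a
# majority-class baseline in place of the real classifier.
confusion = Counter()
for train, test in stratified_kfold(labels, k=5):
    majority = Counter(labels[i] for i in train).most_common(1)[0][0]
    for i in test:
        confusion[(labels[i], majority)] += 1    # key: (true, predicted)

total = sum(confusion.values())   # every example is tested exactly once
```

Reporting the aggregated matrix this way, rather than accuracy on a handful of sentences, is what makes per-class errors visible, which is the point of the reviewer's original request.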

We now explicitly address this tension in the Discussion section. We interpret it as evidence that alienation may be expressed subtly or indirectly, escaping simple polarity classification. This supports the development of a more context-aware sentiment model for future research.

We revised these claims for precision and removed all overly deterministic or unsupported assertions. For example, we now clarify that algorithmic designs may contribute to homogenization under specific conditions. We have also refined our discussion of “diverse content” and its relation to engagement.

We have now cited and discussed Seaver (2019) in the relevant section, acknowledging his framing of recommender systems as traps and linking this metaphor to our SSA model.

We have restructured and substantially expanded the conclusion to include empirically grounded policy implications. We now cite studies on social media and mental health (particularly related to Instagram), algorithmic transparency, and regulatory literature to support our recommendations. Where evidence is insufficient, speculative statements have been removed.

Overall, we have restructured the manuscript to improve flow and thematic focus. Specifically:

  • The theoretical, empirical, and policy discussions are now clearly separated.
  • The methodology and findings sections are more concise and aligned.
  • All claims are now better supported by evidence and recent scholarship.

We appreciate the reviewer’s careful attention and helpful suggestions. All major concerns have been addressed through extensive revision, which we believe has significantly strengthened the manuscript’s conceptual clarity, methodological transparency, and scholarly contribution.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors, 
I recognize the great work you have done, but I would ask you to make further changes to the methodology section. 

I would recommend including the selection criteria and number of comments for the social media platforms you mention in lines 222 to 235, as this information is important for ensuring the triangulation is truly robust.  

Furthermore, in line 235, “multi-step thematic analysis process, described in detail in Section Results,” I strongly suggest explaining the procedure adopted in the methodology section rather than in the results section.

Author Response

Dear Reviewer, 

Thank you for your valuable feedback. We have incorporated both of your important suggestions into the text as you kindly advised.

Warm regards, 
