Article

Flood the Zone with Shit: Algorithmic Domination in the Modern Republic

Department of Political and Global Affairs, Middle Tennessee State University, Murfreesboro, TN 37132-0001, USA
Soc. Sci. 2025, 14(6), 391; https://doi.org/10.3390/socsci14060391
Submission received: 30 April 2025 / Revised: 11 June 2025 / Accepted: 12 June 2025 / Published: 19 June 2025

Abstract

This paper critically examines the risks to democratic institutions and practices posed by disinformation, echo chambers, and filter bubbles within contemporary social media environments. Adopting a modern republican approach and its conception of liberty as nondomination, this paper analyzes the role of algorithms, which curate and shape user experiences, in facilitating these challenges. My argument is that the proliferation of disinformation, echo chambers, and filter bubbles constitutes forms of domination that manipulate vulnerable social media users and imperil democratic ideals and institutions. To counter these risks, I argue for a three-pronged response that cultivates robust institutional and individual forms of antipower by regulating platforms to help protect users from arbitrary interference and empower them to fight back against domination.

1. Introduction

The dangers for democracy from disinformation and echo chambers have concerned theorists since the inception of social media platforms. As reliance on social media platforms has grown, weaponized disinformation, the purposeful spreading of false information intended to deceive, or, in the infamous words of Steve Bannon, flooding the zone with shit, threatens to undermine existing democratic practices. Coupled with this development, there is a worry that social media networks push users into echo chambers characterized by self-reinforcing functions where like-minded individuals solidify their fixed preferences without critique. Moreover, social media platforms that shield users from opposing viewpoints can push those users to ideological extremes. For many, one of the key culprits in the proliferation of disinformation and echo chambers in an increasingly polarized social media landscape is the powerful algorithms that work in the background of applications and control many aspects of users’ experiences. Utilizing a modern republican approach, I critically explore the role algorithms play in this growing problem. I will argue that certain institutional and individual antipowers are necessary to help users inoculate themselves against domination arising from the spread of these practices that threaten democratic ideals and institutions.
The plan of this paper is straightforward. First, I briefly sketch out the modern republican conception of liberty as nondomination. I then connect this to the interferences users may experience in the digital environment. Next, I explore two related potential sources of domination: the Digital Influence Machine and algorithm-driven newsfeeds. I conclude by revisiting modern republican liberty to outline a few potential moves forward to protect and empower users to fight back against domination.

2. Modern Republican Liberty and Digital Domination

2.1. Liberty as Nondomination

At the core of modern republicanism is its distinctive conception of liberty as nondomination (Pettit 1997; Skinner 2002). Modern republicans argue that freedom does not merely consist in the absence of external interferences, nor is it equated with self-mastery. Instead, modern republican liberty as nondomination is understood as a condition where individuals are not subject to interference that is not aligned with their own arbitrium or will. Thus, individuals are free when they live as “a ‘freeman’ rather than a ‘bondsman’, a liber rather than a servus” (Pettit 2006, p. 134). For Pettit (2008b, pp. 106–8), modern liberty as nondomination is characterized by the absence of alien or alienating control on the part of others in a manner that compromises their freedom of choice. It follows, then, that individuals are free in the modern republican sense when they do not have “to live under the potentially harmful power of another” (Pettit 2012, p. 5). Consequently, modern republican liberty as nondomination seeks to minimize individuals’ vulnerabilities to arbitrary power, whether that power arises from state authority (imperium) or among the people (dominium). Put simply, for modern republicans, domination occurs when an agent or agency wields arbitrary power over another (Pettit 1996, p. 578), since such power fails to reflect the common, publicly avowable interests of those subject to it.
As an antidote to domination, Pettit describes modern republican liberty as a form of antipower that is present when democratic institutions and practices seek to eliminate dominating power hierarchies. Accompanying this form of antipower is the recognition that a constitutive relationship must exist between the principles of nondomination, the citizenry, and the normative ideals and institutions of the state. Importantly, republican technologies such as properly constituted institutions, mixed constitutions, the separation of powers, the rule of law, and distinctive republican ideals all work together to help secure liberty as nondomination (Halldenius 2010, pp. 12–13; Pettit 2006, p. 133; List 2006, p. 218). For Pettit, this points to an understanding of modern republican liberty as nondomination as resilient: it can bend, but it resists breaking, since it is deeply ingrained in the institutions and ideals of the republic (Pettit 1997, p. 24). I will return to this issue in Section 5, but first I want to explore the growing threat of arbitrary interference in the digital space.

2.2. Digital Domination

As mentioned above, there are two primary sources of threats to the enjoyment of liberty as nondomination: threats from arbitrary interference arising from the imperium, or the state, and threats stemming from the dominium, or the people. When thinking of how these threats materialize in today’s social media systems, we can see the emergence of what Tamar Megiddo (2020, p. 437) refers to as “digital domination.” Megiddo outlines five tactics increasingly used by states against citizens that can be seen as digital domination. First, governments have increasingly turned to the digital sphere to gather information on political actors by monitoring the social media footprints of targeted actors. China is perhaps the best-known example, monitoring not only the physical whereabouts of its citizens but also aggressively surveilling and restricting their digital presence. Such moves are not confined to authoritarian-leaning states, however; several democratic states employ similar tactics. Take the growing network of mass surveillance programs, such as CCTV and license plate readers used to monitor the movement of people, in countries like the United States and the United Kingdom (Megiddo 2020, pp. 404–8).
Second, some governments have also moved to restrict or block users’ access to online resources (Megiddo 2020, pp. 412–15). Countries like Iran, Egypt, and India have periodically shut down internet service to quell opposition (Gosztonyi 2023; Bischoff 2023). More recently, Twitter, now known as X,1 agreed to restrict Turkish users’ access to certain posts in the runup to the 2023 national elections at the request of the Turkish President, who was engaged in a close election (Stein 2023). Other examples include moves in some countries, like the United States, to block users’ access to the popular app TikTok over potential security concerns (Maheshwari and Holpuch 2025). Third, some governments have also moved to disrupt communication channels by seeding disinformation into social media (Megiddo 2020, pp. 415–19). Echoing Bannon, Tufekci (2017, p. 228) argues that flooding the audience with disinformation “…produce[s] resignation, cynicism, and a sense of disempowerment among the people to dilute their attention and focus.” The final two tactics of digital domination, according to Megiddo, occur more often in authoritarian-leaning states, although not exclusively (Megiddo 2020, pp. 419–25). These tactics involve governments policing citizens (gathering intelligence on them using digital resources) and engaging in the bullying and harassment that can follow such efforts. Examples can be seen in the monitoring, harassment, and, ultimately, imprisonment of the Nobel Peace Prize-winning Filipino journalist Maria Ressa. Ressa, the founder and editor of the online news platform Rappler, often focused on the increasing authoritarianism of the Philippine president, Rodrigo Duterte (Ratcliffe 2021). As can be discerned from this list, Megiddo’s chief concern is with forms of domination originating from the imperium, although sometimes in concert with nefarious actors among the dominium. There is another point to highlight here before proceeding. As is inherent in Megiddo’s analysis, digital domination can vary greatly across states and political systems. In authoritarian-leaning states, the vast power of the state may well systemically entrench domination in social networks, overwhelming any nascent power present in the dominium, because these kinds of states lack the minimum institutional and systemic structures aimed at rooting out domination.
For this paper, however, I want to expand on Megiddo’s analysis and further explore the forms of domination that potentially originate and reside in social networks outside the state’s strict purview. As mentioned earlier, from a modern republican perspective, the pervasive use and power of algorithms in information and social media are concerning in several ways. First, from an individual’s perspective, there must be serious questions about whether algorithms can themselves be seen as a form of domination. Relatedly, another concern must be how agents can use the algorithm to manipulate and ultimately dominate other agents. Then, there are functional concerns regarding the use of algorithms to undermine the very institutions of the republic itself, namely, through the erosion of democratic norms and practices. To further explore these, my primary focus in what follows is an exploration of the broad threats to modern republican liberty and the potential damage to democracy in two main areas. The first is the growing practice of user manipulation by what is called the Digital Influence Machine. I then critically discuss the potentially dominating power of the algorithm-driven newsfeed feature often found in social media platforms such as Facebook and X.

3. Forty-Two

3.1. The Digital Influence Machine

At the end of Douglas Adams’ popular science-fiction novel The Hitchhiker’s Guide to the Galaxy, the supercomputer Deep Thought reveals that the answer to the “Great Question” of “Life, the Universe and Everything” is “forty-two.” To arrive at this answer, Deep Thought took 7.5 million years to run through a complex algorithm with millions of signals, variables, and data points. An algorithm is, in its most simple formulation, a set of instructions or a guide to solve some problem or to accomplish a task. Algorithms can be simple, like a recipe, or highly complex, like the one run by Deep Thought. Some algorithms produce something definitive after running, as in the answer forty-two, while others seek to condition behavior, nudging users this way or that. Increasingly, algorithms have taken on political characteristics and, thus, can be seen as an exercise of political power (Koopman 2022, pp. 344–45). According to Panagia, an algorithm is politically salient because it has the technical power “to predict future outcomes and to coordinate actions” such that individuals are in a certain potentially vulnerable position to be predictable and coordinated (Panagia and Köseoğlu 2017, p. 128). The implications of this point lead to a loop effect with algorithms where users often become and identify with the very thing the algorithm is pushing them toward. Panagia again states that “an algorithm is a dispositif not because it constrains freedom through various forms of domination, but because it proliferates controls on variability and, in this way, governs the movement of bodies and energies” (Panagia 2021, p. 128).
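To make the distinction concrete, consider a minimal sketch in Python; the functions and data here are entirely hypothetical illustrations, not descriptions of any actual system. The first algorithm terminates with a definitive answer, its own modest forty-two; the second never settles anything but conditions what a user sees next, feeding its outputs back into its inputs in just the looping fashion Panagia describes.

```python
# A toy illustration only; both functions and their inputs are invented
# for this example and stand in for far more complex systems.

def definitive_answer(signals: list[int]) -> int:
    """Runs to completion and returns a single, fixed result."""
    return sum(signals) % 100  # a stand-in for Deep Thought's computation

def nudging_feed(watch_history: list[str], catalog: dict[str, list[str]]) -> list[str]:
    """Never 'finishes': each output shapes the next round of inputs."""
    last_topic = watch_history[-1] if watch_history else "default"
    # Steering users back toward what they just consumed produces the
    # loop effect noted above: users become what the algorithm pushes.
    return catalog.get(last_topic, [])
```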
This idea connects to what Nadler et al. call the Digital Influence Machine (DIM). The DIM is a coordinated set of overlapping technologies involving “surveillance, targeting, testing, and automated decision-making to make advertising (commercial and political) more powerful and efficient” (Nadler et al. 2018, p. 1). These authors view the DIM as a form of weaponized information that gives advertisers, or those with an agenda, the ability to micro-target individuals who are believed to be the most receptive to particular kinds of messages. Importantly, the DIM is stealth technology: it targets the most vulnerable while evading those who would most likely react negatively to its influence. What drives the DIM is a specific kind of algorithm working in the background, designed to produce outcomes without being explicitly noticed by those subject to its influence.

3.2. Weaponizing the DIM

On its own, the DIM is simply one part of a larger shift that has taken place within the information system. For Nadler et al., this shift has coincided with changes in the media and political landscape in three key areas: the decline in professional journalism and the subsequent rise in citizen journalism; the expansion of financial resources devoted to political influence, including the landmark Citizens United court case, which unleashed a torrent of dark money into the US political system; and the professionalization of sophisticated data mining operations, like the infamous Cambridge Analytica of the 2016 US Presidential election, in an environment with little democratic regulatory control (Nadler et al. 2018, pp. 1–2). In weaponizing the DIM, political actors use three main tactics. First, they mobilize supporters through identity threats. An example can be seen in the increasing focus among right-wing groups and politicians on Critical Race Theory and Diversity, Equity, and Inclusion (DEI) programs, which has prompted several US states to effectively ban discussions of race and privilege within public school systems and to target DEI in both public and private contexts. Second, they seek to divide their opponents’ coalitions through activities like voter suppression or sponsoring disruptive counterdemonstrations. Other examples include finding and amplifying certain political actors who buck the status quo, such as Robert Kennedy, Jr., or Tulsi Gabbard. Finally, they utilize a growing body of research into influence techniques informed by psychology and behavioral science to manipulate and exploit individuals’ vulnerabilities. For example, shame can be used as a motivating factor, or peer pressure can be employed to nudge individuals toward desired outcomes.
One interesting facet of the DIM is its scale and its actual ability to influence large numbers of people. If the goal is to fundamentally modify individuals’ core beliefs, the DIM is not ideally suited to accomplish this task. Instead, it operates around the margins, interacting with individuals in a subtler manner. Nadler et al. argue that
…the goals of weaponized DIM campaigns will be to amplify existing resentments and anxieties, raise the emotional stakes of particular issues or foreground some concerns at the expense of others, stir distrust among potential coalition partners, and subtly influence decisions about political behaviors (like whether to go vote or attend a rally). In close elections, if these tactics offer even marginal advantages, groups willing to engage in ethically dubious machinations may reap significant benefits.
This scenario was present in the last three US Presidential elections, where the victory margins were quite close, and the electoral college results hinged on tight races in a few swing states. To further explore the rise of the DIM and its potential to cause digital domination, I now want to focus on how some social media platforms use elements of the DIM in their newsfeed algorithms.

4. Dominated by Newsfeed Algorithms

4.1. The Facebook Newsfeed

In fall 2021, the Wall Street Journal published “The Facebook Files”, a series of stories based on an extensive data leak by whistleblower Frances Haugen (Horwitz 2021). These reports detailed Facebook’s knowledge that its platforms, Facebook and Instagram, caused harm to its users in several areas. Other news outlets also picked up on the story, and it eventually became known that Facebook had long been aware that its products could have detrimental effects on various groups of users, such as teenagers and individuals living in certain conflict zones where Facebook was used to stoke resentment and violence. Importantly, Facebook also knew that its products played a significant role in spreading disinformation, such as falsehoods about COVID-19 vaccines and the integrity of the 2020 US Presidential election. To outside analysts, one of the main culprits was the introduction of new reaction emojis and a shift in how Facebook’s algorithm scored certain signals and data points generated by users interacting with the site. The new reaction emojis supplemented Facebook’s “like” function, allowing users to express other emotions such as “love”, “hug”, “haha”, “wow”, “sad”, and “angry”. Coinciding with the introduction of the new emojis, Facebook also reprogrammed the algorithm that controls what users see in their newsfeed to weight the signals generated by the new emojis in ways that pushed more provocative and emotional content into feeds, including content designed to rile users up and make them angry. The data from the document leak showed that Facebook weighted the signal from the new reaction emojis at up to five times that of the original “like” button (Merrill and Oremus 2021).
Although it is a closely guarded trade secret, Facebook’s algorithm is thought to contain at least ten thousand inputs and signals, all of which are weighted in some fashion to produce a final score that ultimately generates what items appear on a user’s newsfeed, and, importantly, in what order those items appear. Due to its global popularity, the reach and penetration of the algorithm’s scoring system highlight just how powerful it can be in shaping users’ social and political attitudes, values, and beliefs. Of course, it is not only reaction emojis that significantly shape the algorithm controlling what appears in a user’s Facebook newsfeed. Also scoring high within the algorithm are what Facebook calls “meaningful user interactions”—things like commenting on, replying to, or sharing a post or news story (Merrill and Oremus 2021). Videos too, like the kind streamed live on January 6 by the Capitol insurrectionists or by protestors during the summer 2020 social justice protests, are another significant input signal to Facebook’s algorithm.
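While the real scoring system remains secret, its general shape can be gestured at with a minimal sketch. Everything below is a hypothetical simplification: the weights are invented, and the actual algorithm reportedly draws on thousands more signals. The one grounded detail is the reported five-to-one weighting of reaction emojis over the original “like” (Merrill and Oremus 2021).

```python
# A hypothetical, radically simplified scoring model. The weights are
# invented; only the 5x emoji multiplier is grounded in the reporting
# cited above (Merrill and Oremus 2021).

LIKE = 1.0

SIGNAL_WEIGHTS = {
    "like": LIKE,
    "angry": 5 * LIKE,       # reaction emojis weighted up to 5x a like
    "love": 5 * LIKE,
    "comment": 10 * LIKE,    # "meaningful user interactions" score high
    "share": 20 * LIKE,      # (illustrative magnitudes only)
    "live_video_view": 15 * LIKE,
}

def score_post(signals: dict[str, int], friends_and_family: bool) -> float:
    """Collapse a post's interaction counts into one ranking score."""
    score = sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
                for name, count in signals.items())
    if friends_and_family:
        score *= 2.0  # conversations among friends and family are boosted
    return score

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order the newsfeed by score rather than by recency."""
    return sorted(posts,
                  key=lambda p: score_post(p["signals"], p["friends_and_family"]),
                  reverse=True)
```

Even in this toy version, the design choice is visible: once anger-laden reactions carry the largest weights, provocative content rises to the top of the feed.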
By opening up user interaction through streaming opportunities, Facebook increased the scope for toxic, misleading, or hateful content designed to keep users engaged on the site. Another significant feature of the Facebook algorithm that prioritizes posts inspiring these kinds of interactions is that it boosts the scores of conversations between family and friends. Facebook’s own research showed that meaningful social interactions between friends and family members, especially over political content, made users interact more on the site (Merrill and Oremus 2021). What makes these kinds of interactions especially powerful is that there is often thought to be an implied level of trust between these users, and an “endorsement” through a meaningful interaction may carry significant weight. For example, if I am Facebook friends with my favorite aunt, Aunt Maude, and she reposts a news item, the Facebook algorithm may score it high in my feed so that I, too, can interact with it. And because this news item has been “endorsed” by someone I trust, my Aunt Maude, and boosted higher in my newsfeed, I am more likely to trust it and less likely to question it than other news items.
On their face, these kinds of interactions or reactions may well be benign, and there is a good possibility that many of them are. However, the Facebook Papers also reveal that Facebook’s own internal research found that users had more meaningful interactions and reactions to controversial posts, thus further incentivizing items designed to elicit strong reactions from users. An internal Facebook study found that news items that generated “the angry reaction emojis were disproportionately likely to include misinformation, toxicity, and low-quality journalism” (Merrill and Oremus 2021). In testimony to the British Parliament, Frances Haugen said that “anger and hate is the easiest way to grow on Facebook.” The upshot here is that the Facebook algorithm systematically amplifies content that many maintain is misleading, toxic, and divisive.
Arguably, juicing the algorithm to favor this kind of content is simply a business decision for Facebook. A 2014 Facebook experiment allowed users to control their newsfeed by turning the algorithm off to let the content appear chronologically. However, Facebook found that giving users this kind of control resulted in significantly less interaction on the site. Oremus (2021) found that “users spent less time on their page, posted less, and interacted less with others. The result was that users logged on less to Facebook, which imperiled user engagement that drives its profit model and lucrative advertising business.” Thus, it makes no business sense for Facebook to turn down the values of negative kinds of inputs to the algorithm. In fact, it seems like the opposite; it makes considerable business sense to turn these signals up in the algorithm. To paraphrase Safiya Noble (2018, p. 38), Facebook creates advertising algorithms, not information algorithms.
This sentiment is perhaps best understood as an artifact of what is now called the attention economy, where users’ attention is treated like a scarce commodity (Goldhaber 1997). For many, social media platforms are the vanguard of competing for and capturing users’ attention to sell them products, convince them of this or that point, and generally to own their time. Engagement is often one of the main currencies of the attention economy as platforms seek to accumulate users’ online attention by feeding them a constant stream of content that their algorithms have determined is more likely to keep the user absorbed. For Heitmayer (2024, p. 23), “In public life, ‘going viral’ or ‘shitstorms’ have become a commonly observed phenomenon where the self-reinforcing mechanism of the attention economy create a gravitational pull around a person, event, or piece of mediated content based on the amount of attention it has already received.”
This has given rise to a new category of entrepreneur: the social media influencer, who uses their reach to endorse a product or idea and profits from their followers through various methods of platform monetization (Mei and Genet 2024, p. 23). In a practice sometimes referred to as engagement farming, these users actively exploit and manipulate social media algorithms in the flooded zone to increase their reach, which in turn increases their revenue from social media platforms.
According to ZipRecruiter, a popular US-based employment marketplace, an average social media influencer earns over USD 100,000 annually (Clark 2024). Moreover, top influencers, like MrBeast or the Paul brothers, earn far more, with some, like football stars Cristiano Ronaldo or Lionel Messi, earning USD 2–3 million per post on Instagram. Many influencers build brands and, thus, spin their engagement to other industries like gaming, fashion, and lifestyle products. For example, TikTok star Charli D’Amelio earned USD 23 million in 2023 through her online presence, creating a clothing brand and appearing on various TV programs (Deen 2024). But influencers are not just entertainers or sports stars. Many focus on providing news and political content to their followers. A study by the Pew Research Center of the 2024 US election found that one-fifth of Americans, including almost forty percent of adults under 30, said they regularly obtained their news from social media influencers. Pew also found that most influencers had no affiliation with traditional news organizations and more explicitly identified as Republican or conservative than Democratic or liberal (Pew Research Center 2024). Notably, the study highlighted the role of monetization, with around sixty percent of influencers earning income from their online presence. Winning in the attention economy means winning in the engagement wars, and the quickest way to win is to game the algorithm. However, for social media platforms, this is precisely what they want—an influencer’s financial success is also their financial success. Thus, while influencers and social media personalities might have ideological reasons for gaming the algorithm, many also have serious financial incentives to practice engagement farming.
Not surprisingly, both right- and left-wing political actors and influencers in the US consistently appear among the top ten performing link posts on Facebook pages, according to CrowdTangle, Facebook’s own monitoring platform (Hutchinson 2022). For many, these personalities and groups have been able to weaponize Facebook’s newsfeed to spread toxic and divisive agendas, all aided by the Facebook algorithm. An analysis by Judd Legum detailed how a far-right website, the DC Enquirer, grew from scratch to become more popular than the Washington Post in just over a month (Legum 2022). A deep dive into the site shows that it was run by prominent right-wing figures who controlled a set of Facebook pages that automatically reposted DC Enquirer content, thus amplifying the signals being sent into the Facebook algorithm. The site also used Facebook advertising to drive users to engage with the website. Legum found that when users clicked on ads consisting of petitions or surveys, they would begin following DC Enquirer’s Facebook pages, often without their knowledge or consent. So, the more clicks and followers for the DC Enquirer, the more toxic and divisive content is spread. For Facebook, the more ads, the more clicks, the more money. And, for users, the more interactions and shares, the more signals sent into the algorithm, thus boosting the post’s score and making it more likely to appear at the top of other users’ newsfeeds, especially those of family and friends.

4.2. Twitter Becomes X

Of course, Facebook is not the only social media platform that uses an aggressive algorithm to curate users’ newsfeeds. Perhaps ironically, prior to acquiring the platform, Elon Musk complained in a series of posts that “You are being manipulated by the algorithm in ways you don’t realize… I’m not suggesting malice in the algorithm, but rather that it’s trying to guess what you might want to read and, in doing so, inadvertently manipulate/amplify your viewpoints without you realizing this is happening” (emphasis added) (Musk 2022). X, unlike Facebook, allows users to control their newsfeed with a simple toggle: the algorithm curates the “For You” tab, whereas the “Following” tab is simply a reverse chronology of posts from accounts users follow. However, after acquiring X and acting as its CEO, Musk reportedly ordered his engineers to reconfigure the algorithm to boost his posts throughout the app so that they almost always appeared near the top of his followers’ newsfeeds. This was apparently carried out in response to posts from former President Biden that received far more engagement than his own (Newton and Schiffer 2023). But that was just the beginning of Musk’s manipulation of X’s algorithm. Following his formal endorsement of Donald Trump for President on July 13 (Schleifer and Mac 2024), an analysis by Graham and Andrejevic (2024) found evidence pointing to a “potential algorithmic adjustment that preferentially enhanced visibility and interaction for Musk’s posts.” X announced in December 2024 that it had updated the algorithm to make “posts you see more relevant” (X Eng 2024). Then, in January 2025, Musk announced another incoming tweak to the algorithm “to promote more informational/entertaining content,” to reduce negativity, and “to maximize unregretted user-seconds,” which is social media speak for how much time users spend on a platform (Musk 2025a). In wanting to keep users on his site, Musk has also stated on X that the algorithm limits users’ options to explore alternative sources of information by restricting posted links that may take users elsewhere (Musk 2025b). The upshot here is that X’s algorithm is trying to win the attention economy by feeding users information that will keep them on X, thus limiting their options for a more rounded information diet and making it more likely they will be manipulated by the algorithm.
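The difference between the two feeds can be sketched schematically. In the sketch below, following_feed mirrors the simple reverse chronology of the “Following” tab, while for_you_feed is a hypothetical stand-in for X’s proprietary relevance model, including an assumed penalty on outbound links of the kind Musk has described.

```python
# The "Following" tab mirrors the reverse chronology X describes;
# "for_you_feed" is a hypothetical stand-in for X's proprietary model,
# including an assumed penalty on outbound links.

def following_feed(posts: list[dict]) -> list[dict]:
    """Reverse-chronological posts from followed accounts."""
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

def for_you_feed(posts: list[dict], topic_affinity: dict[str, float]) -> list[dict]:
    """Rank by predicted engagement; recency is only one signal of many."""
    def predicted_engagement(post: dict) -> float:
        score = post["prior_engagement"] * topic_affinity.get(post["topic"], 0.1)
        if post["has_external_link"]:
            score *= 0.5  # assumed down-ranking of links leading off-platform
        return score
    return sorted(posts, key=predicted_engagement, reverse=True)
```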
Duran and Lakoff (2022) have suggested that Musk’s actions amount to “algorithm warfare” and have created a “social media echo chamber that forces everyone to engage the worst of extreme conservative ideas in order to make such ideas seem mainstream.” Musk’s admission and subsequent manipulation of the algorithm highlight how much users are being influenced, in often unknown ways, regarding what they are exposed to, pushing them this way or that. And, perhaps not surprisingly, multiple studies have found that X users are being pushed in a definite direction by the algorithm, which in turn has prompted an EU investigation into X’s algorithm (O’Carroll 2025). Huszár et al. (2022) found that the X algorithm, in the mean/median case, offered greater amplification to right-leaning news outlets than left-leaning news outlets. In a similar vein, a 2017 study of hate speech and racism in Australia found that user manipulation of some platforms’ content rules, including X’s, combined with their algorithm-based recommendation systems, led to the amplification of “platformed racism” (Matamoros-Fernández 2017). A study by Bandy and Diakopoulos (2021) also maintained that X’s algorithm plays, at minimum, “a minor supporting role in shifting media exposure for users, especially considering upstream factors that create the algorithm’s input—factors such as human behavior, platform incentives, and content moderation.” In an analysis in the runup to the 2024 US Presidential race, the Wall Street Journal found that “X users with interests in topics such as crafts, sports, and cooking [were] being blanketed with political content and fed a steady diet of posts that lean toward Donald Trump and that sow doubt about the integrity of [the election]” (Gillum et al. 2024). Taken as a whole, these studies’ findings mirror those of the research into Facebook’s algorithm, demonstrating that controversial posts, including those espousing hate speech and toxicity, are often amplified in users’ newsfeeds. It follows, then, that from a modern republican perspective, this can be taken as the very kind of arbitrary interference that contributes to forms of digital domination.

4.3. Flood the Zone with Shit

To further understand how this might be the case, let us turn back to Pettit. In a piece analyzing Robert Dahl’s classic discussion of power, Pettit distinguishes between congenial and uncongenial control over individuals’ personal choices (Pettit 2008a, pp. 71–2). The distinction between these two positions is that congenial control leaves the agent with what Pettit calls “can-do” options—options the agent believes are possible and over which they retain control. Pettit links this form of control with reasoning and deliberation, which can be shared among agents in a spirit of mutual respect and collaboration. Alternatively, uncongenial control removes the “can-do” options from the agent through the use of interference, invigilation, and/or inhibition, resulting in the agent losing a fair amount of control over their choices (Pettit 2008a, p. 72). Thus, in a situation of uncongenial control, an agent has less power to act because their choices have been interfered with in a manner that does not track their interests. For modern republicans, this is precisely what leads to domination. In the case of newsfeed algorithms, users’ choices are being arbitrarily interfered with to the extent that their “can-do” choices are being manipulated and influenced, often without their knowledge and ability to control them.
This point ties into one that I made earlier, namely that the DIM is stealth technology characterized by a specific kind of algorithm that micro-targets users by manipulating their choices and exploiting their vulnerabilities. Described by Karen Yeung as hypernudging, this kind of manipulative micro-targeting homes in on users’ weaknesses to exploit their vulnerabilities (Yeung 2016). Manipulation, in this case, implies that hidden or covert influences are conditioning users’ choices through the algorithm’s recommendation system, which targets their weaknesses to keep them engaged on social media platforms (Susser et al. 2019, p. 26). For Pettit, “Manipulation denies you the possibility of making a choice on the basis of a proper understanding of the options on offer” (Pettit 2012, pp. 54–55). Relatedly, manipulation often involves deception in the form of disinformation amplified by an algorithm, another sign of potential domination. The upshot of this onslaught is the fulfillment of Steve Bannon’s infamous injunction to “flood the zone with shit” as a way to deceive, distract, and disorient individuals, who may be left bewildered and confused about existing democratic institutions and practices (Illing 2020).
Consider the role disinformation played in the violent disorder that occurred in the UK in the wake of the tragic stabbing of three young children attending a Taylor Swift dance class in Southport in July 2024. News of the attack spread quickly throughout the UK, with one social media user posting on LinkedIn
My two youngest children went to holiday club this morning in Southport for a day of fun only for a migrant to enter and murder/fatally wound multiple children. My kids are fine. They are shocked and in hysterics but they are safe. My thoughts are with the other 30 kids and families that are suffering right now. If there anytime to close the borders completely it’s right now! Enough is enough.
Even though the poster eventually deleted the post, other users posted screenshots on different social media platforms, thus ensuring that the news would spread. According to BBC Verify, a fact checking team tasked with investigating the veracity of video clips and social media posts, the original post had been viewed more than two million times within a few hours of the attack. Like a social media equivalent of a game of telephone, the story began to evolve as more “details” were added by subsequent users, culminating with posts claiming that the suspect was Ali-Al-Shakati, an asylum seeker who had arrived in the UK by boat in 2023. In the week following the attack, hundreds of protests and demonstrations followed, some of which turned into violent riots with attacks aimed at religious and ethnic minorities. However, the information circulated on social media in the hours after the attack was inaccurate. The attacker was not an asylum-seeking migrant but rather Axel Rudakubana, a British-born teenager with a history of mental illness and violence. In January 2025, Rudakubana was found guilty of the murders and sentenced to fifty-two years in prison (Halliday 2025).
In a report released in fall 2024, Ofcom, the UK’s media regulator, determined that “Illegal content and disinformation spread widely and quickly online following the attack” and that “There was a clear connection between online activity and violent disorder seen on UK streets” (Ofcom 2024). In an open letter, the head of Ofcom stated
Accounts (including some with over 100,000 followers) falsely stated that the attacker was a Muslim asylum seeker and shared unverified claims about his political and ideological views. Posts about the Southport incident and subsequent events from high-profile accounts reached millions of users, demonstrating the role that virality and algorithmic recommendations can play in driving divisive narratives in a crisis period.
Moreover, Ofcom found that social media platforms were also used to post and disseminate calls for violent action in response to the attack. Importantly, Ofcom identified “closed groups comprising thousands of members” hosted on social media platforms as one of the main drivers of the spread of disinformation and the subsequent violent disorder. Posts within these groups encouraged racial and religious hatred “and provok[ed] violence and damage to people and property, including by identifying potential targets for damage or arson.” This finding highlights another feature of digital domination: the rise of echo chambers and filter bubbles. I turn to this issue next.

4.4. Echo Chambers and Filter Bubbles

The moves to flood the zone with shit highlight a further problem with these kinds of algorithms. The types of low-quality, toxic, and highly partisan disinformation pushed into users’ feeds by algorithms can create powerful echo chambers and filter bubbles. A recent report by Harvard University’s Shorenstein Center for Media, Politics and Public Policy (Baum et al. 2017) found that:
Current social media systems provide a fertile ground for the spread of misinformation that is particularly dangerous for political debate in a democratic society. Social media platforms provide a megaphone to anyone who can attract followers. This new power structure enables small numbers of individuals, armed with technical, social or political know-how, to distribute large volumes of disinformation, or “fake news.” Misinformation on social media is particularly potent and dangerous for two reasons: an abundance of sources and the creation of echo chambers.
What this points to is that for many users, there is a self-reinforcing function in echo chambers, where they find themselves cleaved off into sometimes close-knit networks of like-minded individuals. Once in the group, participants goad each other into extreme positions by avoiding meaningful critiques of their beliefs, thus solidifying their fixed preferences and often driving them toward the extremes. Moreover, social media platforms that shield users from opposing viewpoints and ideas tend to push those users’ views out to the ideological extremes.
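This self-reinforcing function can be captured in a minimal, purely illustrative sketch: a recommender ranks items by affinity to what a user has already clicked, and each click sharpens that profile, narrowing the feed session after session.

```python
# Purely illustrative: rank items by affinity to past clicks, then feed
# each new click back into the user's profile.

from collections import Counter

def recommend(items: list[dict], clicks: Counter, k: int = 3) -> list[dict]:
    """Rank items by the share of past clicks on their topic."""
    total = sum(clicks.values()) or 1
    return sorted(items, key=lambda i: clicks[i["topic"]] / total,
                  reverse=True)[:k]

def simulate(items: list[dict], clicks: Counter, sessions: int) -> Counter:
    """Each session, the user clicks the top recommendations, and those
    clicks sharpen the profile that drives the next session's feed."""
    for _ in range(sessions):
        for item in recommend(items, clicks):
            clicks[item["topic"]] += 1
    return clicks
```

Run for even a few simulated sessions, whichever topic begins with a small lead comes to monopolize the recommendations: a filter bubble in miniature.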
A study by Kitchens, Johnson, and Gray found that the longer users spend on Facebook, the more polarized they become: “social network homophily and algorithmic filtering constrain the information sources that individuals choose to consume, shielding them from opinion-challenging information and encouraging them to adopt more extreme viewpoints” (Kitchens et al. 2020, pp. 1619–20). The study also found that Facebook was more likely to polarize conservatives than liberals and that the “increased use of Facebook was associated with increased information source diversity and a shift toward more partisan sites in news consumption” (Kitchens et al. 2020, pp. 1619–20). In an op-ed summarizing their research, the authors place the blame for the creation of the Facebook echo chamber squarely at the feet of its algorithm (Johnson et al. 2020). Tracking these findings, another study, led by Sandra González-Bailón, suggests that Facebook “is substantially segregated ideologically” (González-Bailón et al. 2023). The report highlighted three significant findings. First, the authors found that ideological segregation is significant. Second, there was an asymmetry between conservative and liberal audiences, with a significant echo chamber occupied exclusively by conservatives. Finally, and worryingly, the study found that news items flagged as disinformation by Facebook’s own fact checkers tended to cluster within this echo chamber.2 Not surprisingly, Facebook is not alone in this regard. A separate study of YouTube’s algorithm produced similar results: researchers found that recommendations deep within the algorithm’s suggestions to right-leaning users “come from extremist, conspiratorial, and otherwise problematic channels” (Haroon et al. 2023).
The dangers for democracy from echo chambers have concerned theorists since the inception of social media platforms. As far back as the mid-1990s, Robert Putnam described them as the cyberbalkanization of the internet (Putnam 2000, pp. 177–78). Similarly, Jürgen Habermas suggested that “the rise of millions of fragmented chat rooms across the world tend instead to lead to the fragmentation of large but politically focused mass audiences into a huge number of isolated issue publics” (Habermas 2006, p. 423, fn. 4). For modern republicans, this drift is deeply concerning. One significant danger here is the risk that echo chambers may drive polarized groups into factions. Members of factions tend to place their own narrow self-interest above that of the political community (Maynor 2009). A study by Adamic and Glance (2005) found that within echo chambers, positions and preferences often harden and become more entrenched. Thus, to the extent that newsfeed algorithms amplify and promote extreme views, the risk of creating factions through echo chambers rises. A related challenge is that the same technology that allows users access to unlimited information sources also encourages them to disregard and avoid those that may be critical of their own viewpoints (Harmon 2004). In what are often called filter bubbles, many social media algorithms curate information based on an end user’s previous internet use, ranking outputs consistent with those likely to be most favored and thus driving users deeper into their echo chambers (Kaluža 2022, p. 269). This can lead to another danger, described by Benkler et al. (2021) as a dangerous lack of shared reality, since echo chambers often lack conflicting information, leaving some falsehoods unchallenged.
The dangers brought about by this lack of shared reality pose a considerable challenge for modern republicanism, as they are likely to undermine democratic norms such as deliberation, a basic feature of the tradition (Sunstein 1988). Skinner points out that republicanism stresses dialog and “a willingness to negotiate over rival intuitions concerning the applicability of evaluative terms” (Skinner 1998, pp. 15–16). This connects to what he refers to as the watchwords of republicanism: “audi alteram partem, always listen to the other side” (Skinner 1998, pp. 15–16; also see Pettit 1997, p. 189). Following Mutz (2006, pp. 84–86), listening to the other side can help cultivate increased social harmony, political tolerance, and higher levels of social trust. However, echo chambers and filter bubbles seem to do the opposite: for many, they undermine social harmony by being repositories of extremism, they decrease political tolerance by being full of toxicity and hate, and they can shatter social trust by pushing individuals to the ideological extremes and deepening political polarization. Thus, for users in today’s social media environment, why listen to the other side if the other side has been manipulated by algorithms and is trapped in an echo chamber? But if the other side has been manipulated by algorithms and is trapped in an echo chamber, could the same not be said about me? Why should the other side trust me any more than I trust them? Ultimately, this is the real danger associated with echo chambers and filter bubbles: not that they exist in and of themselves, but that they may play a driving role in further eroding levels of social trust among citizens and entrenching dangerous political polarization.
In fact, a growing number of scholars argue that while echo chambers and filter bubbles might have detrimental effects on democratic norms, it is not for many of the reasons highlighted above.3 Rather, this thesis centers on the thought that echo chambers and filter bubbles may be detrimental not necessarily because they isolate partisans from one another, but because they facilitate political self-sorting and amplify conflict. For Törnberg (2022, p. 10):
Polarization on digital media is driven by conflict rather than isolation, affording a form of politics rooted in identity rather than opinion. Digital media intensifies polarization, not as a sorting machine, but by fueling a runaway social process that destabilizes plural societies by drawing more and more issues into a single expanding social and cultural divide.
Bruns (2019) has also argued forcefully that the threat of echo chambers and filter bubbles is a myth and is a distraction from actually addressing the very real issues endangering democracy. This, for Bruns, is the inability of polarized political groups to engage with one another to cultivate mutual understanding and foster democratic consensus, phenomena that he maintains predate the rise in digital technology and the growth of social media.
What these writers are suggesting is not that echo chambers and filter bubbles are harmless to democratic societies, but rather that they may be symptoms of a larger disease that must be addressed: rigid partisans entrenched in polarized groups who “dislike, even loathe” one another and who are, in part, motivated by “owning the other side” (Iyengar et al. 2012). Törnberg argues that it is here that digital media may be having an impact, in that it has disturbed the integrative mechanisms of plural societies, exacerbating and potentially accelerating societal fragmentation. The result is “a maelstrom in which additional identities, beliefs, and cultural belonging become sucked into a growing and all-encompassing societal division, which threatens the very foundation of social cohesion” (Törnberg 2022, p. 10).

5. Epilog

5.1. Regulate, Protect, and Empower

So, there are considerable headwinds for any state as it struggles to withstand the arbitrary interference caused by algorithms contributing to the rise in disinformation and echo chambers. It may seem like the zone is, in fact, flooded—that the traditional dikes and levees of democratic institutions and values have been breached. However, one thing modern republicanism has going for it is its resiliency. Earlier, following Pettit, I pointed out that one way to think about liberty as nondomination is as a form of antipower. This means that some forms of interference are tolerated, namely those that are not arbitrary and that contribute to living free from domination.
Above, I mentioned that these kinds of interferences—the kinds that might emanate from modern republican institutions and ideals—help to secure liberty as nondomination and are indicative that republican liberty has a kind of resiliency to it. Expanding on this idea, Brennan and Hamlin (2001, p. 47) argue that the “idea of resilience is related to the idea of assurance—a resilient liberty is one that is assured in the sense that it is not contingent on circumstances but rather is entrenched in the institutional structure.” It follows that modern republican liberty should be understood as a resilient core of protection that allows individuals to determine which ends they will pursue within the context of nondomination. Two forms of antipower are at play in realizing this kind of resilience: institutional and individual forms.

5.2. Modern Republican Antipower Revisited

The main source of antipower can be found within a polity’s institutions and policies. By utilizing traditional republican technologies like checks and balances, the dispersion of power across a range of legislative, administrative, and judicial levels, democratic contestation, and active civic engagement, modern republicans seek to regulate the resources of the powerful (Maynor 2003, 2006). Pettit (1996, pp. 590–92) highlights three key areas that can promote the antipower associated with liberty as nondomination. It can be found in protective institutions that ensure a fair and just system of laws. It can also be found in regulatory institutions that seek to curtail the accumulation of power and diffuse power hierarchies. Finally, it can be found in empowering institutions that work to give individuals, in the words of Pettit, echoing Sen, “equality of basic capabilities” (Pettit 1996, p. 591). While this third point is primarily institution-based, its effects will likely contribute to developing an individualized form of antipower to help agents fight against domination.
For Pettit, it is not enough to have choices for action; these choices must be meaningful, and an agent must possess the capability of making choices independently of another’s will. Thus, the “focus is on the need to have the state promote functioning capabilities, not just functioning prospects. The aspiration, quite rightly, is to get rid of dependency, not just destitution” (Pettit 2001, p. 19). And it is precisely this position of not being dependent on the will of another that modern republicans value, whether that dependency is on another agent or agency. Moreover, structural impediments (such as poverty) to an individual’s well-being may prevent that person from enjoying the status of being free from domination. In other words, to the extent that I am not able to enjoy basic capabilities, taken by Sen (1993, p. 41) to be the ability to “achieve valuable human functionings,” then I am in a position of domination and not able to stand on an equal footing with others.
There are other reasons to understand the potential overlap between modern republicanism and the capabilities approach through the presence of antipower in empowering institutions. These can be seen in three notable advantages individuals enjoy when they are free from domination that are likely to boost their basic capabilities (Maynor 2003, pp. 43–8). First, individuals may experience a reduction in anxiety or uncertainty that they may encounter from those who seek to dominate them. As Pettit (1997, p. 85) points out, the presence of modern republican liberty lowers individuals’ exposure to arbitrary interference since it tracks their interests. A second advantage of individual antipower is that it reduces the degree to which individuals must be in a defensive stance. Put simply, not having to be on constant defense secures the equal footing individuals enjoy because they do not have to anticipate arbitrary interference and protect themselves from its effects (Pettit 1997, p. 86). The status of being on an equal footing as others and having that status be common knowledge points to a third kind of individual antipower that allows individuals to feel confident and assured of their status. For Pettit, “they can look the other person in the eye: they do not have to bow and scrape” (Pettit 1997, p. 87). This allows them to extend a degree of social trust and civility toward others whom they know to be on an equal footing with them.
While whatever individual antipower agents gain from empowering institutions helps secure them from domination, institutional forms of antipower also work to promote their status as free through protective and regulatory institutions. Democratic institutional structures that counter arbitrary interference allow individuals to bring their interests out into political forums to be accounted for and tracked by others and the state (Maynor 2019). Thus, in the first instance, social media platforms must be further regulated to minimize users’ vulnerability to domination. According to Muldoon and Raekstad (2022), while algorithms might be sources of domination, this is not an intrinsic feature. Analyzing how algorithms may contribute to the domination of gig workers, the authors argue that, under certain conditions, it may be possible for algorithms to minimize domination:
We contend that democratic ownership and control of productive assets and equal rights of participation in decision-making over the conditions and processes of work could transform the role of algorithms at work and enable the beginning of a broader discussion about how democratic collectives should control the use of technology in the workplace.
Tying back to my earlier discussion on how algorithms may become nondominating, the authors suggest that if agents can have meaningful control over their inputs, they will likely have better outcomes from their actions. In other words, if the algorithm can be programmed with the right kind of non-arbitrary inputs—namely, those that maintain “can-do” choices for users—it may well be capable of minimizing domination. Another solution would be to require social media companies to have more transparency in their algorithms and to allow users to have far more control over what appears in their newsfeeds, including being able to turn it off altogether. As pointed out earlier, while Facebook does not currently have this feature, other platforms like X give users more control over what is in their newsfeeds and how items appear. However, even in the “Following” feed on X, users are still subject to the algorithm as it pushes posts utilizing a range of features such as “in case you missed it,” promoted posts, liked posts, conversation threads, and “who to follow” invitations, in addition to the algorithmic ordering of replies.
A more far-reaching path for a modern republic is to change how users own and control their data. Roberta Fischli (2022) has argued for what she terms a “data-owning democracy” underpinned by modern republican liberty as nondomination. Fischli suggests that states create public infrastructures to allow citizens more of a say in how their data is collected and used. Moreover, she maintains that citizens ought to have easy access to their data flows through a digital wallet that keeps track of all the data users have allowed to be collected by third parties. Both recommendations are rooted in the recent efforts of the EU to give users more say in how the data they generate is used through the General Data Protection Regulation (European Union 2016). The thought here is to use public resources and policy regulation to empower individuals to fight back against domination by stressing four kinds of antipower. First, a data-owning democracy reduces dependencies through the creation of “spaces where collective data and democratic oversight are possible” and through citizens’ possession of secondary rights over their data. Second, “by strengthening citizens’ control over their data flows, they enjoy new possibilities of political participation and self-determination.” Third, owning data empowers users to dispose of their data as they see fit, including commodifying it to sell or transfer. Finally, for Fischli, a data-owning democracy opens potential new forms of collective action through the proliferation of data cooperatives (Fischli 2022, pp. 216–17). Thus, giving users more control over how they interact with the algorithm will give them more meaningful choices and power over their actions, since their “can-do” choices remain congenial and intact.
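What such a digital wallet might record can be sketched as follows; the data model is an illustrative assumption of my own, since neither Fischli nor the GDPR prescribes an implementation. The point is simply that the record of data flows, and the power to revoke them, sits with the citizen rather than the platform.

```python
# An illustrative data model only; neither Fischli nor the GDPR
# prescribes an implementation.

from dataclasses import dataclass, field

@dataclass
class DataGrant:
    third_party: str
    categories: list[str]      # e.g., ["location", "browsing history"]
    revoked: bool = False

@dataclass
class DataWallet:
    grants: list[DataGrant] = field(default_factory=list)

    def grant(self, third_party: str, categories: list[str]) -> None:
        """Record a newly consented data flow."""
        self.grants.append(DataGrant(third_party, categories))

    def revoke(self, third_party: str) -> None:
        """Withdraw consent for future use of previously granted data."""
        for g in self.grants:
            if g.third_party == third_party:
                g.revoked = True

    def active_flows(self) -> list[DataGrant]:
        """Everything the citizen currently allows to be collected."""
        return [g for g in self.grants if not g.revoked]
```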
However, even if possible, resolving the potential domination resulting from social media algorithms may be the least concerning threat posed by the growing reliance on algorithms in the digital environment. Far more worrying is the extent to which digital tools like algorithmically driven large language models and artificial intelligence threaten to change many fundamental facets of modern life. A growing body of literature has identified a new wave of “algorithmic-based coordination as a form of governance that intervenes into society and culture through shaping constructions of social reality” (König 2019, p. 468). Described by König as an Algorithmic Leviathan, in areas like criminal justice, education, traffic, and health care, governments are increasingly relying on algorithms to direct resources and control behavior. For example, automated traffic control systems may alter traffic light timing to more efficiently manage changes in volume. Likewise, algorithms may play a role in determining the allocation of education or health resources to collectively produce specific desirable outcomes. And while some of these uses, like the ones above, may be quite easy to spot and thus control, other uses may operate at the micro-level and thus be harder to regulate. Think of these as nudges, a concept promoted by Thaler and Sunstein that refers to “any aspect of the choice-architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives” (Thaler and Sunstein 2008, p. 6). The upshot of this approach is that individuals’ choices can be conditioned by how those choices are presented. Consider the recent efforts in some US states to shift from an opt-in choice architecture to an opt-out one when registering voters through Motor Voter initiatives (National Conference of State Legislators 2024).
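A back-of-the-envelope model shows why the default matters. The figures are illustrative assumptions, but the logic holds: when only a minority of people ever act to change a default, whichever outcome is made the default tends to prevail.

```python
# The 15 percent action rate is an illustrative assumption: most people
# never change a default, so the default largely determines the outcome.

def registered_share(default_registered: bool, action_rate: float = 0.15) -> float:
    """Share of eligible voters registered under a given default."""
    if default_registered:
        return 1.0 - action_rate   # opt-out: registered unless they act
    return action_rate             # opt-in: registered only if they act

print(registered_share(False))  # opt-in  -> 0.15
print(registered_share(True))   # opt-out -> 0.85
```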
While nudges may seem a small and benign force affecting individuals’ choices, at scale and without robust oversight there is the possibility that they introduce a fair amount of arbitrariness into the greater Algorithmic Leviathan (Creel and Hellman 2022). Not surprisingly, a paramount concern among modern republicans is whether these kinds of moves make it likely that states become dominators themselves, along the lines outlined in Section 2. Even as the state can use its regulatory power to minimize potential domination, there is also a risk that it will use algorithms to reinforce dominant hierarchies of power. Moving forward, this will be one of the stress tests for modern republics: do they have the necessary resilience to foster institutional antipower and reduce individuals’ exposure to arbitrary interference, or will they become systemic dominators themselves?
Worryingly, at this moment, in the flooded zone, it is unclear whether modern republican resiliency can hold. In large part, the answer to this question will also turn on how individuals can build up stores of individual antipower. To build this kind of antipower, individuals must up their game when engaging with social media and actively participate in reducing their exposure to domination. The upshot of both forms of antipower is that users must retain a fair degree of control over the sources and nature of the interference they encounter, particularly in determining whether that interference is arbitrary. This kind of intimate control constitutes a critical dimension of resilience: the more control individuals have, the more resilient their liberty is; conversely, the less control they have, the more vulnerable they are to domination. Thus, the extent to which individuals can exercise control in both institutional and individual forms will largely determine how free they are and, ultimately, the fate of the modern republic.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

I would like to thank the editors for encouraging me to be a part of this project and for their support throughout the editorial process. I would also like to thank the anonymous referees for their invaluable comments, as well as the participants of the Political Philosophy and New Technologies panel at the 2025 Annual Meeting of the Midwest Political Science Association for their feedback. An earlier draft was published in Turkish as “(Dez)enformasyon Cumhuriyeti: Algoritmalar, Enformasyonun Silaha Dönüştürülmesi ve Tahakküm” in Saç (2023), Cumhuriyetçilik ve Cumhuriyetler.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1. In July 2023, Twitter officially changed its name to X. Going forward, I will use X to reflect this change, including for any posts, studies, etc., that may have been written prior to the name change.
2. In January 2025, Meta announced that it would end third-party fact-checking on Facebook and move to a Community Notes model (Kaplan 2025).
3. For a good overview of this debate, see Papp (2023).

References

  1. Adamic, Lada, and Natalie Glance. 2005. The political blogosphere and the 2004 U.S. election. Paper presented at 3rd International Workshop on Link Discovery—LinkKDD ’05, Chicago, IL, USA, August 21–25. [Google Scholar]
  2. Bandy, Jack, and Nicholas Diakopoulos. 2021. Curating Quality? How Twitter’s Timeline Algorithm Treats Different Types of News. Social Media + Society 7: 205630512110416. [Google Scholar] [CrossRef]
  3. Baum, Matthew, David Lazer, and Nicco Mele. 2017. Combating Fake News: An Agenda for Research and Action. Shorensteincenter.org. Available online: https://shorensteincenter.org/combating-fake-news-agenda-for-research/ (accessed on 5 November 2024).
  4. Benkler, Yochai, Robert Faris, Hal Roberts, and Ethan Zuckerman. 2021. Study: Breitbart-Led Right-Wing Media Ecosystem Altered Broader Media Agenda. Columbia Journalism Review. Available online: https://www.cjr.org/analysis/breitbart-media-trump-harvard-study.php (accessed on 5 May 2023).
  5. Bischoff, Paul. 2023. Internet Censorship 2020: A Global Map of Internet Restrictions—Comparitech. Comparitech.com. Available online: https://www.comparitech.com/blog/vpn-privacy/internet-censorship-map/ (accessed on 6 June 2025).
  6. Brennan, Geoffrey, and Alan Hamlin. 2001. Republican Liberty and Resilience. Monist 84: 45–59. [Google Scholar] [CrossRef]
  7. Bruns, Axel. 2019. Are Filter Bubbles Real? Cambridge: Polity Press. [Google Scholar]
  8. Clark, Meredith. 2024. These Are the Social Media Influencers with the Highest Net Worths in the US. The Independent. Available online: https://www.the-independent.com/life-style/influencers-net-worth-tiktok-b2550620.html (accessed on 2 February 2025).
  9. Creel, Kathleen, and Deborah Hellman. 2022. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Canadian Journal of Philosophy 52: 26–43. [Google Scholar] [CrossRef]
  10. Deen, Safed. 2024. Messi the Mega Influencer: Brands Love His 500 Million Followers and Down-to-Earth Persona. Available online: https://www.usatoday.com/story/sports/soccer/2024/03/10/messi-makes-millions-social-media-inter-miami-star-safe-bet-for-brands/72842402007/ (accessed on 5 November 2024).
  11. Duran, Gil, and George Lakoff. 2022. Algorithm Warfare: How Elon Musk Uses Twitter to Control Brains. FrameLab. Available online: https://www.theframelab.org/algorithm-warfare-how-elon-musk-uses/ (accessed on 28 April 2025).
  12. European Union. 2016. General Data Protection Regulation (GDPR). Available online: https://gdpr-info.eu/ (accessed on 10 January 2025).
  13. Fischli, Roberta. 2022. Data-owning democracy: Citizen empowerment through data ownership. European Journal of Political Theory 23: 147488512211103. [Google Scholar] [CrossRef]
  14. Gillum, Jack, Alexa Corse, and Adrienne Tong. 2024. Exclusive | X Algorithm Feeds Users Political Content—Whether They Want It or Not. WSJ. Available online: https://www.wsj.com/politics/elections/x-twitter-political-content-election-2024-28f2dadd (accessed on 5 November 2024).
  15. Goldhaber, Michael. 1997. The attention economy and the Net. First Monday 2. [Google Scholar] [CrossRef]
  16. González-Bailón, Sandra, David Lazer, Pablo Barberá, Meiqing Zhang, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Deen Freelon, Matthew Gentzkow, Andrew Guess, and et al. 2023. Asymmetric Ideological Segregation in Exposure to Political News on Facebook. Science 381: 392–98. [Google Scholar] [CrossRef]
  17. Gosztonyi, Gergely. 2023. The Practice of Restricting Internet Access Before the European Court of Human Rights or New Tools of Political Censorship. In Censorship from Plato to Social Media. Law, Governance and Technology Series. Cham: Springer, vol. 61. [Google Scholar] [CrossRef]
  18. Graham, Timothy, and Mark Andrejevic. 2024. Tech Billionaire Elon Musk’s Social Media Posts Have Had a ‘Sudden Boost’ Since July, New Research Reveals. The Conversation. Available online: https://theconversation.com/tech-billionaire-elon-musks-social-media-posts-have-had-a-sudden-boost-since-july-new-research-reveals-242490 (accessed on 5 November 2024).
  19. Habermas, Jürgen. 2006. Political Communication in Media Society: Does Democracy Still Enjoy an Epistemic Dimension? The Impact of Normative Theory on Empirical Research. Communication Theory 16: 411–26. [Google Scholar] [CrossRef]
  20. Halldenius, Lena. 2010. Building Blocks of a Republican Cosmopolitanism. European Journal of Political Theory 9: 12–30. [Google Scholar] [CrossRef]
  21. Halliday, Josh. 2025. Axel Rudakubana: From ‘Unassuming’ Schoolboy to Notorious Southport Killer. The Guardian. Available online: https://www.theguardian.com/uk-news/2025/jan/25/axel-rudakubana-from-unassuming-schoolboy-to-notorious-southport-killer (accessed on 10 January 2025).
  22. Harmon, Amy. 2004. Ideas & Trends; Politics of the Web: Meet, Greet, Segregate, Meet Again. The New York Times, January 25. Available online: http://www.nytimes.com/2004/01/25/weekinreview/25harm.html?ex=1390366800&en=1f841e0f15b6538f&ei=5007&partner=USERLAND (accessed on 5 November 2024).
  23. Haroon, Muhammad, Magdalena Wojcieszak, Anshuman Chhabra, Xin Liu, Prasant Mohapatra, and Zubair Shafiq. 2023. Auditing YouTube’s recommendation system for ideologically congenial, extreme, and problematic recommendations. Proceedings of the National Academy of Sciences USA 120: e2213020120. [Google Scholar] [CrossRef]
  24. Heitmayer, Maxi. 2024. The Second Wave of Attention Economics. Attention as a Universal Symbolic Currency on Social Media and beyond. Interacting with Computers 37: 18–29. [Google Scholar] [CrossRef]
  25. Horwitz, Jeff. 2021. The Facebook Files. Wall Street Journal, September 15. Available online: https://www.wsj.com/articles/the-facebook-files-11631713039 (accessed on 5 November 2024).
  26. Huszár, Ferenc, Sofia Ira Ktena, Conor O’Brien, Luca Belli, Andrew Schlaikjer, and Moritz Hardt. 2022. Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences USA 119: e2025334119. [Google Scholar] [CrossRef] [PubMed]
  27. Hutchinson, Andrew. 2022. Meta Releases New ‘Widely Viewed Content’ Report for Facebook, Which Continues to be a Baffling Overview. Social Media Today. Available online: https://www.socialmediatoday.com/news/meta-releases-new-widely-viewed-content-report-for-Facebook-which-contin/619645/ (accessed on 28 April 2025).
  28. Illing, Sean. 2020. ‘Flood the Zone with Shit’: How Misinformation Overwhelmed Our Democracy. Vox. Available online: https://www.vox.com/policy-and-politics/2020/1/16/20991816/impeachment-trial-trump-bannon-misinformation (accessed on 10 January 2025).
  29. Iyengar, Shanto, Gaurav Sood, and Yphtach Lelkes. 2012. Affect, Not Ideology: A Social Identity Perspective on Polarization. Public Opinion Quarterly 76: 405–31. [Google Scholar] [CrossRef]
  30. Johnson, Stephen, Brent Kitchens, and Peter Gray. 2020. Facebook Serves as an Echo Chamber, Especially for Conservatives. Blame Its Algorithm. Washington Post, October 26. Available online: https://www.washingtonpost.com/opinions/2020/10/26/facebook-algorithm-conservative-liberal-extremes/ (accessed on 22 February 2023).
  31. Kaluža, Jernej. 2022. Habitual Generation of Filter Bubbles: Why is Algorithmic Personalisation Problematic for the Democratic Public Sphere? Javnost—The Public 29: 267–83. [Google Scholar] [CrossRef]
  32. Kaplan, Joel. 2025. More Speech and Fewer Mistakes. Meta. Available online: https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/ (accessed on 3 April 2025).
  33. Kitchens, Brent, Stephen Johnson, and Peter Gray. 2020. Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption. MIS Quarterly 44: 1619–49. [Google Scholar] [CrossRef]
  34. Koopman, Colin. 2022. The Political Theory of Data: Institutions, Algorithms, & Formats in Racial Redlining. Political Theory 50: 337–61. [Google Scholar] [CrossRef]
  35. König, Pascal. 2019. Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance. Philosophy & Technology 33: 467–85. [Google Scholar] [CrossRef]
  36. Legum, Judd. 2022. A Far-Right Website Created 36 Days Ago Is More Popular on Facebook Than the Washington Post. Popular.info. Available online: https://popular.info/p/a-far-right-website-created-36-days?utm_source=url (accessed on 28 April 2025).
  37. List, Christian. 2006. Republican freedom and the rule of law. Politics, Philosophy & Economics 5: 201–20. [Google Scholar] [CrossRef]
  38. Maheshwari, Sapna, and Amanda Holpuch. 2025. Why TikTok Is Facing a U.S. Ban, and What Could Happen Next. The New York Times, January 17. Available online: https://www.nytimes.com/article/tiktok-ban.html (accessed on 14 June 2025).
  39. Matamoros-Fernández, Ariadna. 2017. Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society 20: 930–46. [Google Scholar] [CrossRef]
  40. Maynor, John. 2003. Republicanism in the Modern World. Cambridge: Polity. [Google Scholar]
  41. Maynor, John. 2006. Modern Republican Democratic Contestation: A Model of Deliberative Democracy. In Republicanism in Theory and Practice. Edited by Iseult Honohan and Jeremy Jennings. London: Routledge. [Google Scholar]
  42. Maynor, John. 2009. Blogging for democracy: Deliberation, autonomy, and reasonableness in the blogosphere. Critical Review of International Social and Political Philosophy 12: 443–68. [Google Scholar] [CrossRef]
  43. Maynor, John. 2019. The New Modes and Orders of Disruption: Web 3.0 and Republican Resilience. Brolly: Journal of Social Sciences 2: 27–41. [Google Scholar]
  44. Megiddo, Tamar. 2020. Online Activism, Digital Domination, and the Rule of Trolls. Columbia Journal of Transnational Law 58: 394. [Google Scholar] [CrossRef]
  45. Mei, Maggie, and Corine Genet. 2024. Social media entrepreneurship: A study on follower response to social media monetization. European Management Journal 42: 23–32. [Google Scholar] [CrossRef]
  46. Merrill, Jeremy, and Will Oremus. 2021. Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation. Washington Post, October 26. Available online: https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm/ (accessed on 22 October 2024).
  47. Muldoon, James, and Paul Raekstad. 2022. Algorithmic Domination in the Gig Economy. European Journal of Political Theory 22: 147488512210820. [Google Scholar] [CrossRef]
  48. Musk, Elon. 2022. X. Available online: https://twitter.com/elonmusk/status/1525612988115320838? (accessed on 28 April 2025).
  49. Musk, Elon. 2025a. Available online: https://x.com/elonmusk/status/1875355425601999255 (accessed on 28 April 2025).
  50. Musk, Elon. 2025b. Available online: https://x.com/elonmusk/status/1915806794393457034 (accessed on 28 April 2025).
  51. Mutz, Diana. 2006. Hearing the Other Side: Deliberative Versus Participatory Democracy. Cambridge: Cambridge University Press. [Google Scholar]
  52. Nadler, Anthony, Matthew Crain, and Joan Donovan. 2018. Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech. Available online: https://datasociety.net/wp-content/uploads/2018/10/DS_Digital_Influence_Machine.pdf (accessed on 23 February 2025).
  53. National Conference of State Legislatures. 2024. Automatic Voter Registration. Available online: https://www.ncsl.org/elections-and-campaigns/automatic-voter-registration (accessed on 23 February 2025).
  54. Newton, Casey, and Zoë Schiffer. 2023. Yes, Elon Musk Created a Special System for Showing You All His Tweets First. The Verge. Available online: https://www.theverge.com/2023/2/14/23600358/elon-musk-tweets-algorithm-changes-twitter (accessed on 23 February 2025).
  55. Noble, Safiya. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. [Google Scholar]
  56. O’Carroll, Lisa. 2025. EU Asks X for Internal Documents About Algorithms as It Steps up Investigation. The Guardian. Available online: https://www.theguardian.com/technology/2025/jan/17/eu-asks-x-for-internal-documents-about-algorithms-as-it-steps-up-investigation (accessed on 23 February 2025).
  57. Ofcom. 2024. Open Letter to UK Online Service Providers. Available online: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/open-letter-to-uk-online-service-providers (accessed on 29 April 2025).
  58. Oremus, Will. 2021. Why Facebook Won’t Let You Control Your Own News Feed. Washington Post. Available online: https://www.washingtonpost.com/technology/2021/11/13/facebook-news-feed-algorithm-how-to-turn-it-off/ (accessed on 5 November 2024).
  59. Panagia, Davide. 2021. On the Possibilities of a Political Theory of Algorithms. Political Theory 49: 109–33. [Google Scholar] [CrossRef]
  60. Panagia, Davide, and Çağlar Köseoğlu. 2017. #datapolitik: An Interview with Davide Panagia. Contrivers’ Review. Available online: http://www.contrivers.org/articles/40/Davide-Panagia-Caglar-Koseoglu-Datapolik-Interview-Political-Theory/ (accessed on 4 April 2022).
  61. Papp, Janos. 2023. Recontextualizing the Role of Social Media in the Formation of Filter Bubbles. Hungarian Yearbook of International Law and European Law 11: 136–50. [Google Scholar] [CrossRef]
  62. Pettit, Philip. 1996. Freedom as Antipower. Ethics 106: 576–604. [Google Scholar] [CrossRef]
  63. Pettit, Philip. 1997. Republicanism: A Theory of Freedom and Government. Oxford: Oxford University Press. [Google Scholar]
  64. Pettit, Philip. 2001. A Theory of Freedom: From the Psychology to the Politics of Agency. Cambridge: Polity Press. [Google Scholar]
  65. Pettit, Philip. 2006. The Determinacy of Republican Policy: A Reply to McMahon. Philosophy & Public Affairs 34: 275–83. [Google Scholar] [CrossRef]
  66. Pettit, Philip. 2008a. Dahl’s power and republican freedom. Journal of Power 1: 67–74. [Google Scholar] [CrossRef]
  67. Pettit, Philip. 2008b. Republican Freedom: Three Axioms, Four Theorems. In Republicanism and Political Theory. Edited by Cécile Laborde and John Maynor. Oxford: Blackwell. [Google Scholar]
  68. Pettit, Philip. 2012. On the People’s Terms: A Republican Theory and Model of Democracy. Cambridge and New York: Cambridge University Press. [Google Scholar]
  69. Pew Research Center. 2024. America’s News Influencers. Pew Research Center. Available online: https://www.pewresearch.org/journalism/2024/11/18/americas-news-influencers/ (accessed on 23 February 2025).
  70. Putnam, Robert. 2000. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster. [Google Scholar]
  71. Ratcliffe, Rebecca. 2021. Journalists Maria Ressa and Dmitry Muratov Receive Nobel Peace Prize in Oslo. The Guardian. Available online: https://www.theguardian.com/world/2021/dec/10/journalists-maria-ressa-and-dmitry-muratov-receive-nobel-peace-prize (accessed on 23 February 2025).
  72. Schleifer, Theodore, and Ryan Mac. 2024. Elon Musk Endorses Trump, Moments After Shooting at His Rally. The New York Times, July 13. Available online: https://www.nytimes.com/2024/07/13/us/politics/elon-musk-trump-endorsement.html (accessed on 23 February 2025).
  73. Saç, Selman. 2023. (Dez)enformasyon Cumhuriyeti: Algoritmalar, Enformasyonun Silaha Dönüştürülmesi ve Tahakküm. In Cumhuriyetçilik ve Cumhuriyetler. Ankara: Nika. [Google Scholar]
  74. Sen, Amartya. 1993. Capability and Well-Being. In The Quality of Life. Oxford: Oxford University Press, pp. 30–53. [Google Scholar] [CrossRef]
  75. Skinner, Quentin. 1998. Liberty Before Liberalism. Cambridge: Cambridge University Press. [Google Scholar]
  76. Skinner, Quentin. 2002. A Third Concept of Liberty. Available online: https://www.thebritishacademy.ac.uk/documents/1972/pba117p237.pdf (accessed on 23 February 2025).
  77. Stein, Perry. 2023. Twitter Says It Will Restrict Access to Some Tweets Before Turkey’s Election. The Washington Post. Available online: https://www.washingtonpost.com/technology/2023/05/13/turkey-twitter-musk-erdogan/ (accessed on 28 April 2025).
  78. Sunstein, Cass. 1988. Beyond the Republican Revival. The Yale Law Journal 97: 1539. [Google Scholar] [CrossRef]
  79. Susser, Daniel, Beate Roessler, and Helen Nissenbaum. 2019. Technology, autonomy, and manipulation. Internet Policy Review 8: 1–22. Available online: https://policyreview.info/articles/analysis/technology-autonomy-and-manipulation (accessed on 29 April 2025). [CrossRef]
  80. Thaler, Richard, and Cass Sunstein. 2008. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven and London: Yale University Press. [Google Scholar]
  81. Thomas, Ed, and Shayan Sardarizadeh. 2024. Southport riot: How a LinkedIn Post Helped Spark Unrest—BBC Tracks Its Spread. BBC, October 25. Available online: https://www.bbc.com/news/articles/c99v90813j5o (accessed on 23 February 2025).
  82. Törnberg, Petter. 2022. How Digital Media Drive Affective Polarization through Partisan Sorting. Proceedings of the National Academy of Sciences USA 119: e2207159119. [Google Scholar] [CrossRef] [PubMed]
  83. Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press. Available online: https://d-nb.info/124031910X/34 (accessed on 23 February 2025).
  84. X Eng. 2024. Available online: https://x.com/XEng/status/1869527834844434877 (accessed on 28 April 2025).
  85. Yeung, Karen. 2016. ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society 20: 118–36. [Google Scholar] [CrossRef]