1. Introduction
The expansion of algorithmic governance, big data analytics, and artificial intelligence (AI) has reconfigured the operation of power in contemporary society. Unlike the institutional surveillance that characterized Michel Foucault’s disciplinary societies, digital systems anticipate, sort, and modulate human behavior through predictive infrastructures. These developments call for a renewed engagement with Foucauldian theory, particularly regarding subjectivity and resistance.
This article proposes the concept of the algorithmically modulated subject—an individual formed not through institutional discipline, but through anticipatory computation. In contrast to earlier models that emphasized the internalization of norms within enclosed spaces, algorithmic power operates through continuous data flows that classify and act on individuals in real time.
While scholars such as Shoshana Zuboff have emphasized the economic logic of digital platforms, this study turns to the ontological dimension of algorithmic governance. It argues that contemporary systems do not merely influence choices but participate in shaping who we are and how we become intelligible within computational architectures. Drawing on Foucault, Deleuze, and Colin Koopman, this article reconsiders how subjectivity is produced and contested under digital conditions.
This analysis brings into focus the evolving nature of resistance within Foucauldian frameworks, where power and resistance are mutually constitutive. Algorithmic systems, by acting preemptively and through opaque infrastructures, challenge conventional modes of opposition. These conditions demand attention to emerging forms of resistance that target the epistemic and infrastructural dimensions of digital governance, rather than merely its outputs.
By integrating philosophy, media theory, and political thought, this article contributes a conceptual framework for understanding algorithmic control not just as a technical issue, but as a transformation of the human condition.
Any reconsideration of digital power involves engaging with the evolving forms of resistance that remain central to Foucauldian thought. As algorithmic systems reconfigure the conditions under which subjects are constituted, resistance takes shape not simply as a reaction but as a constitutive element within contemporary power relations.
2. From Discipline to Control: A Foucauldian Perspective
Foucault conceptualized disciplinary societies as environments where individuals were subjected to structured surveillance and normalization processes within institutions. These institutions functioned as mechanisms of power that trained, corrected, and disciplined subjects to conform to social norms. Schools, hospitals, prisons, and military barracks exemplify such disciplinary institutions, each with its own hierarchical structures, routine procedures, and mechanisms of observation designed to mold individuals into compliant members of society [
1] (p. 197). The disciplinary model operates by creating spatially confined domains where individuals are categorized, monitored, and conditioned to internalize regulatory norms. Through repeated exposure to structured discipline, individuals learn to regulate their own behavior, not merely due to the immediate presence of authority figures but because they absorb the logic of control itself, internalizing the expectation of constant oversight.
Foucault termed this structured discipline ‘panopticism’, illustrating how surveillance operates not merely as an external force but as an internalized mechanism of self-regulation, wherein individuals modify their own behaviors under the assumption that they are being watched [
1] (p. 201). This mode of control, exemplified by Jeremy Bentham’s Panopticon—a prison design in which inmates could never know when they were being observed—demonstrates the psychological effect of surveillance. In disciplinary societies, power is most effective when it is both invisible and omnipresent: individuals act as if they are constantly monitored even in the absence of direct observation, so that norms are upheld through self-discipline and surveillance functions as a mechanism of governance. However, this disciplinary model of power, which operated predominantly within enclosed institutional spaces, is increasingly supplanted by a more fluid and dispersed mode of control in the digital age.
Gilles Deleuze builds on Foucault to describe a shift from enclosed, institution-based power to “societies of control”, in which individuals are never fully contained but constantly modulated by digital systems [
2] (p. 5). Rather than moving through fixed phases—school, work, military—people now inhabit an uninterrupted circuit of data gathering, analysis, and intervention [
2] (p. 7). Digital platforms, biometric sensors, and predictive analytics monitor behavior in real time, directing choices and determining access to goods, services, and opportunities [
3] (p. 152). Power thus extends beyond walls and routines to permeate everyday networks, where algorithmic decision-making blurs the line between autonomy and constraint.
This transformation replaces spatial confinement with perpetual mobility: control operates through continuous tracking and fine-tuning of actions rather than one-off categorization [
2] (p. 6). Metrics like credit scores, social media rankings, and other algorithmic assessments constantly recalibrate individuals, embedding authority within everyday interactions [
4] (p. 21). Because these mechanisms are diffuse, opaque, and self-reinforcing, resisting them is far more complex than challenging a visible hierarchy: it demands both technical savvy and critical engagement with the infrastructures that invisibly shape our behavior [
5] (p. 43).
Where Foucault’s model relied on visible surveillance and correction within institutions, contemporary control uses automated, often imperceptible processes to steer behavior. This evolution compels us to rethink agency and subjectivity under algorithmic governance, as digital infrastructures increasingly dictate our experiences. Reclaiming autonomy demands new forms of resistance—whether through data activism, encryption tools, or regulatory measures—that confront power enacted through continuous modulation rather than discrete disciplinary acts.
3. Algorithmic Governance and the New Regime of Power
The rise of big data and artificial intelligence has enabled the expansion of algorithmic governance—data-driven systems that predict, influence, and regulate behavior without relying on traditional disciplinary institutions. These systems operate beyond the visible gaze of panoptic surveillance. Instead of correcting deviance through observation, they shape behavior in advance through incentives, nudges, and predictive modeling [
3] (p. 201) [
6] (p. 87).
A defining feature of algorithmic control is its opacity. As Frank Pasquale argues in The Black Box Society, these systems often function without transparency, making it difficult for individuals to understand, question, or contest the decisions made about them. This lack of accountability produces asymmetries of power in which governments and corporations act on individuals without providing them visibility into how they are being classified or evaluated [
4] (p. 19). As a result, questions of consent become increasingly complex, as individuals rarely have meaningful control over how their data is collected, analyzed, and utilized [
3] (p. 218). As Cathy O’Neil and Virginia Eubanks show, these tools frequently encode and reproduce structural biases, disproportionately targeting marginalized populations while appearing neutral and objective [
7] (p. 112) [
8] (p. 73).
Furthermore, the algorithmic turn in governance extends beyond surveillance to behavioral manipulation. Shoshana Zuboff’s concept of “surveillance capitalism” highlights how digital platforms like Google and Facebook monetize predictive data by shaping user behavior through targeted advertising and algorithmic curation [
3] (p. 94). This commodification of personal data transforms individuals into “data subjects”, whose actions and preferences are continuously harvested to optimize engagement and profitability [
9] (p. 57). Consequently, traditional notions of autonomy and free will are increasingly challenged by the pervasive influence of algorithmic decision-making, raising ethical concerns about digital self-determination and democratic participation [
5] (p. 67).
In this regime, power is dispersed across digital platforms, machine-learning models, and predictive infrastructures that operate continuously. Individuals become data profiles acted upon by systems they rarely see and cannot challenge, raising ethical and political questions about autonomy, classification, and resistance. Algorithmic governance thus represents a profound transformation in how subjectivity is produced, governed, and potentially contested. Responding to these transformations may involve interdisciplinary approaches that bridge philosophy, technology studies, and policymaking to envision more just and transparent digital governance models.
4. Biopolitics and Datafication of the Human Subject
Foucault’s notion of biopolitics, which describes the governance of life itself through statistical management and regulation, finds new resonance in the age of digital surveillance. The commodification of personal data, biometric tracking, and AI-driven decision-making extends biopolitical control beyond traditional state mechanisms into corporate domains [
10] (p. 139). As Foucault argued, biopolitics functions through the regulation of populations, using data to assess and control aspects of life such as health, reproduction, and labor [
10] (p. 143). Building on Zuboff’s insights into behavioral commodification, this section explores how such mechanisms intersect with Foucauldian biopolitics in producing datafied subjectivities. This shift highlights how biopolitical power has evolved from state-centered governance to a dispersed and decentralized system of corporate and algorithmic control.
An illustrative example of digital biopolitics is the pervasive use of biometric data in identity verification and security systems. Governments and corporations alike collect vast amounts of biometric information, including facial recognition, fingerprints, and even behavioral patterns, to monitor and regulate individuals [
11] (p. 12). While these technologies are often justified as tools for enhancing security and efficiency, they simultaneously erode privacy and personal autonomy [
6] (p. 76). The Chinese social credit system exemplifies algorithmic governance that enforces compliance through automated scoring and access regulation [
12] (p. 49). Yet similar logics operate in Western contexts as well—credit ratings, algorithmic hiring tools, and risk assessments increasingly influence access to jobs, services, and mobility [
9] (p. 45).
Another dimension of digital biopolitics is the algorithmic categorization of individuals based on their online activities. AI-driven decision-making systems process vast datasets to create predictive profiles that influence everything from creditworthiness to healthcare access [
4] (p. 29). Such black-boxed systems, as noted earlier, further complicate meaningful resistance. This form of data-driven governance extends Foucault’s concept of biopolitics by transforming human subjects into data points that can be sorted, ranked, and controlled based on algorithmic predictions [
8] (p. 81).
The implications of this transformation are profound. Traditional forms of biopolitical control relied on institutions such as hospitals, schools, and prisons to regulate populations, but contemporary digital infrastructures operate continuously, tracking individuals across multiple domains of life [
5] (p. 88). Moreover, the datafication of human subjects creates new forms of inequality, as marginalized communities are often disproportionately targeted by algorithmic surveillance and automated decision-making [
7] (p. 113). For example, predictive policing algorithms have been shown to reinforce racial biases by disproportionately flagging minority neighborhoods as high-risk areas, leading to over-policing and systemic discrimination [
13] (p. 97).
Such a shift to digital biopolitics also raises ethical and political questions regarding consent and resistance. Digital rights advocates emphasize the need for greater transparency, regulatory oversight, and individual agency over personal data to counterbalance the growing power of corporate and governmental surveillance [
14] (p. 23).
As biopolitics continues to evolve in the digital age, it becomes necessary to examine how datafication reconfigures human subjectivity. If, as Foucault argued, power operates through the production and management of life itself, then the contemporary era demands a reassessment of how algorithmic governance shapes individual autonomy, social inequalities, and political agency. By analyzing the intersection of biopolitics and digital infrastructures, this article seeks to contribute to the ongoing discourse on data sovereignty, algorithmic accountability, and the future of human rights in an increasingly datafied world.
5. Implications for Autonomy and Resistance
The shift from institution-centered discipline to networked algorithmic control fundamentally alters both the conditions of autonomy and the terrain of resistance. In classic disciplinary settings—schools, prisons, hospitals—power was visible and localized, and individuals could act against identifiable authorities through legal appeals, demonstrations, or workplace actions [
3] (p. 215). Algorithmic control, by contrast, is dispersed across interlocking digital systems—recommendation engines, automated risk assessments, facial-recognition networks—so there is no single “site” to challenge. Authority here is exercised through embedded protocols and real-time interventions in daily interactions, making overt, centralized forms of contestation largely ineffective [
4] (p. 27).
This shift toward anticipatory modulation calls for further attention to how subjectivity is pre-structured by digital infrastructures. The idea of the algorithmically modulated subject corresponds with Colin Koopman’s account of the “informational person”. In How We Became Our Data, Koopman outlines how contemporary subjects are shaped through practices of data formatting that structure the conditions under which individuals become knowable and actionable within digital systems. These practices do not simply document individuals; they participate in the organization of subjectivity itself. From this perspective, resistance involves not only questioning specific algorithmic outcomes but also examining the informational infrastructures that shape how subjects are classified and engaged. Framing resistance in this way shifts the focus from reactive strategies to more foundational interventions that address the epistemic and infrastructural dimensions of algorithmic governance [
15].
Alongside Koopman’s genealogical account, Docherty’s study of Facebook offers a concrete illustration of how digital platforms modulate subjectivity through interface design and habitual engagement [
16]. Drawing on Foucault’s notion of “governance through habit”, Docherty shows how the platform encourages repetitive behaviors—liking, commenting, sharing—thereby scripting users into a normative model of “healthy” digital citizenship. Rather than enforcing overt discipline, the system operates by shaping behavioral expectations through interface affordances, reward cycles, and social cues. These mechanisms demonstrate how algorithmic systems govern not only through data classification but also by structuring the micro-routines of daily participation. In this sense, the algorithmically modulated subject is not just formatted by data flows but by embodied, platform-mediated practices that align behavior with platform profitability and algorithmic visibility.
In this light, autonomy entails not only countering specific algorithmic outcomes but also transforming the very informational architectures that render subjects legible and governable. Resistance thus becomes an epistemic and infrastructural project—one that interrogates classification schemas, disrupts data flows, and insists on transparent, accountable design.
The lack of transparency in AI-driven decision-making and predictive analytics leaves individuals largely unaware of the ways in which they are being monitored, classified, and influenced. Unlike traditional surveillance, where individuals might recognize the presence of cameras or the authority of an institution, algorithmic surveillance functions in ways that are often imperceptible to those being observed. Data is continuously extracted, analyzed, and utilized to shape decisions about employment, creditworthiness, law enforcement targeting, and even social visibility—all without clear mechanisms of accountability [
7] (p. 119). Because these systems rely on vast datasets and complex machine-learning models, their decision-making processes are often hidden, making it difficult for individuals to understand or contest the ways in which they are being categorized and governed.
While algorithmic governance transforms the modalities of power, it also reconfigures the very conditions under which resistance becomes possible. As Foucault [
13] (p. 95) reminds us, resistance is co-constitutive of power—but for resistance to emerge, power must first be recognized as such. This raises what might be called a Gramscian problem: algorithmic power often functions not through overt repression, but through normalization, embedding itself so seamlessly into everyday life that it no longer appears as power [
17] (pp. 12–13, 145–146). Individuals affected by algorithmic scoring or predictive sorting may remain unaware of these systems until the effects manifest elsewhere, such as a denied loan or an unexpected classification. Until such disruptions occur, the logic of governance remains largely invisible. This delayed or displaced recognition undermines the formation of critical awareness and the grounds for collective resistance. Rendering this latency visible is itself a political act—one that underpins many contemporary efforts at epistemic resistance [
15] (pp. 4–5).
Furthermore, this algorithmic shift complicates traditional forms of resistance. Individuals affected by algorithmic bias or unjust automated decisions often struggle to identify responsible parties, as decision-making authority is fragmented among AI developers, platform operators, government agencies, and corporate stakeholders. This decentralization diminishes opportunities for legal recourse and organized opposition [
4] (p. 27). Resistance must now be reimagined as distributed, strategic, and ongoing.
Scholars and activists have begun to develop resistance strategies that reflect this new configuration of power. Data obfuscation—deliberately introducing misleading or noisy data—is one such approach. Tools such as VPNs, encrypted messaging services, and browser extensions like TrackMeNot attempt to counteract surveillance by concealing, encrypting, or diluting the data trails users generate [
15] (p. 45). Similarly, algorithmic auditing has emerged as a means of holding systems accountable. Researchers reverse-engineer black-box models to expose hidden biases, as seen in the work of organizations like the Algorithmic Justice League [
8] (p. 137). These efforts seek to render algorithmic processes visible and challenge the epistemic asymmetry they reinforce [
4] (p. 39).
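To make the logic of obfuscation concrete, the following minimal sketch (in Python, with hypothetical names and a toy decoy list) illustrates the principle behind tools like TrackMeNot: a genuine query stream is diluted with randomly chosen decoys, so that any behavioral profile inferred from the observable stream loses precision. This is an illustrative sketch only; real obfuscation tools implement a far more sophisticated version inside the browser.

```python
import random

# Toy list of decoy topics; any profile built from the mixed stream
# will attribute most observed interest to these, not to the user.
DECOY_TOPICS = ["weather", "recipes", "sports scores", "gardening", "history"]

def obfuscate(genuine_queries, decoys_per_query=3, rng=None):
    """Interleave each genuine query with random decoy queries,
    then shuffle to remove ordering cues as well."""
    rng = rng or random.Random()
    stream = []
    for query in genuine_queries:
        stream.append(query)
        stream.extend(rng.choice(DECOY_TOPICS) for _ in range(decoys_per_query))
    rng.shuffle(stream)
    return stream

signal = ["protest planning", "encryption tools"]
mixed = obfuscate(signal, decoys_per_query=4)
# The genuine queries are now a small minority of the observable stream.
print(len(mixed), sum(q in signal for q in mixed))  # prints: 10 2
```

The design point is epistemic rather than cryptographic: nothing is hidden, but the signal-to-noise ratio of the surveillance data is deliberately degraded, which is precisely the asymmetry-reversing move the paragraph above describes.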
Decentralized privacy advocacy complements these strategies. Projects such as blockchain-based identity systems aim to give users control over their data. Alternative infrastructures—including decentralized social media platforms and privacy-focused search engines—work to reduce dependence on dominant platforms and resist the commodification of digital identities [
9] (p. 116). These initiatives reflect not only technological ingenuity but also a political effort to redesign the infrastructures that shape subjectivity [
3] (p. 231).
Legal and policy frameworks also play a role. The European Union’s General Data Protection Regulation (GDPR) and its Artificial Intelligence Act, now in force, seek to empower individuals with rights to transparency and contestation [
18] (p. 178). However, enforcement remains inconsistent, and the systems these laws target often evolve more quickly than regulation itself [
14] (p. 31). Consequently, preserving meaningful autonomy in an age of algorithmic modulation [
10] (p. 155) depends on pairing legal reform with technological and cultural forms of resistance.
Ultimately, resistance today is not only political or legal but ontological. It concerns the conditions of subjectivation—how we become intelligible as data-driven selves within algorithmic architectures. The challenge is not merely to oppose power but to reconfigure the logics through which power constitutes us.
6. Conclusions
This article has examined the evolution from Foucauldian disciplinary societies to Deleuzian control societies, highlighting the emergence of algorithmic governance as a new regime of power. In doing so, it has shown how digital infrastructures no longer simply observe or correct behavior within institutional enclosures but instead anticipate and shape subjectivity through continuous, data-driven modulation. This transition necessitates not only a recontextualization of Foucault’s theory of power but also a preliminary sketch of how the human subject is being reconstituted in the digital age.
Rather than internalizing norms through surveillance and discipline, the algorithmically modulated subject is governed by predictive systems that operate in real time, pre-structuring decisions, desires, and opportunities. Identity becomes an output of algorithmic anticipation, not introspection or institutional confession. This shift introduces ontological challenges to autonomy and resistance, which can no longer rely solely on legal or institutional redress but must engage the infrastructures that shape legibility and agency itself.
Thus, rethinking Foucault’s insights today invites a shift in focus—from exposing visible mechanisms of power to interrogating the subtle architectures that organize behavior, perception, and legibility. Rather than positioning resistance as a response to overt domination, this article foregrounds how subjectivity is continually shaped within environments governed by predictive operations and anticipatory logic. These dynamics blur the boundaries between participation and control, making contestation less about visibility and more about infrastructural conditions. In this light, the contribution of this article lies in reframing the political as a struggle over the constitution of the self within algorithmically modulated space, one that requires new theoretical tools for engaging with the architecture of governance itself.
Future research should expand this sketch by deepening the conceptual tools available for contesting algorithmic control and by integrating interdisciplinary insights from philosophy, media theory, and legal studies. This reorientation is essential for developing regulatory and ethical frameworks capable of confronting the infrastructural reshaping of the human condition in the digital era.
By reevaluating Foucault’s theory in the context of the digital era, this study not only contributes to contemporary debates on power and control but also lays the groundwork for future research into digital authority and its potential limitations.