1. Introduction
I argue transhumanism is not a radical rupture from humanism but its continuation—technologically intensified and corporately branded. The transhumanist vision of technologically enhanced humans does not escape the orbit of late-stage capitalism. Instead, it inherits and extends its core logic: commodification, individuation, and market capture.
The human, within this framework, is incubated and intubated by corporate parents—profit-driven entities like Apple1, Meta2, and Google3. The question we now face is not merely “What kind of human will I become?” but “Whose human am I?” Are you an Apple human, a Meta human, or a Google human in the branded warfare for subjectivity and market supremacy?
This metaphor—of humans as biotechnological offspring of platform capital—adds a vivid biopolitical edge to critiques already circulating in posthumanist theory. In my work (Lockhart, 2025b), I develop the concept of cybernetic subjectivity—a mode of being governed by corporate systems that organize perception, behavior, and affect through data-driven infrastructures. This resonates with Dean’s (2002, 2010) analysis of neoliberal subject formation as a recursive feedback loop of optimization and performativity, as well as with Schafheitle et al.’s (2020) argument that datafication now operates as an infrastructure of organizational control.
What I offer here is not merely an extension of these critiques but a reframing: transhumanism as the incubation of human subjectivity within corporate infrastructures that act as techno-parental incubators and ventilators. This framing is supported by Morozov’s (2013) dismantling of Silicon Valley’s techno-solutionist mythology and by Barbrook and Cameron’s (1995) genealogy of transhumanism within the libertarian–neoliberal convergence they call the Californian Ideology. The enhanced human, then, is not an emancipated posthuman self but a cybernetic-neoliberal product engineered in branded ecologies, optimized for market alignment, and sold as freedom (Hayles, 1999).
2. Background and Context
Before advancing my own argument, it is important to pause and trace a set of influential critiques developed by key figures in posthumanist theory. These interventions do not dismiss transhumanism wholesale. Rather, they interrogate the ideological underpinnings of transhumanism’s utopian promises of optimization, transcendence, and liberation through technology. What they reveal is that these promises are not neutral or novel; they are entangled with enduring assumptions about the human, the role of technology, and the logic of the market. Grasping the contours of these critiques is crucial, as my argument builds upon and extends them, particularly by drawing attention to the emergence of the branded, corporatized subject as a central figure in contemporary techno-futures.
2.1. Brief Historical Context
To situate this critique, it is important to recognize that transhumanism and posthumanism, despite sounding similar, emerge from fundamentally different genealogies. Transhumanism arose from Enlightenment humanism, Silicon Valley futurism, and techno-libertarian optimism. As Morozov (2013) has argued, Silicon Valley’s futurism is inseparable from its deeply embedded techno-solutionist worldview: an orientation that frames every social or political problem as a technological challenge to be optimized. Barbrook and Cameron (1995) analyze this orientation more ideologically, tracing it to the Cold War fusion of countercultural libertarianism and neoliberal market logic, a formation they call the Californian Ideology. Edwards (1996), meanwhile, places these developments in the longer arc of Cold War computing, showing how cybernetic systems thinking and military-sponsored technological infrastructures shaped the logics of control, abstraction, and technical mastery that underpin contemporary techno-futurist visions. Figures like Kurzweil (2005), Bostrom (2014), and More (2013) popularized its core vision: a future where human limitations (e.g., aging, disease, even death) can be overcome through technological enhancement. This vision often presumes a universal subject, but in practice it reflects the aspirations of a very particular kind of subject: affluent, able-bodied, Western, and male.
Posthumanism, by contrast, emerged from a critical tradition rooted in post-structuralism, feminist theory, ecological thought, and continental philosophy. Thinkers like Haraway (1991), Braidotti (2013), and Wolfe (2010) reject the idea of a stable, bounded human subject. Instead, they emphasize entanglement, vulnerability, and the de-centering of the human in favor of a more distributed understanding of agency across biological, technological, and environmental systems.
These two trajectories, one amplifying the liberal humanist subject through technology and the other deconstructing that very subject, do not merely offer competing definitions. They represent conflicting ontological and political commitments. And while posthumanism remains largely an academic critique, transhumanism is increasingly embedded in the infrastructure of everyday life, marketed through glossy visions of longevity, productivity, and hyper-efficiency. That shift—from discourse to system, from idea to interface—is where my argument begins.
2.2. Brief Terminological Glossary
To ensure clarity as we move forward, I want to briefly define some key terms that frequently appear in this discussion. Understanding these concepts is crucial for grasping the nuances of the critique I develop.
Biopolitics: The governance of life itself, where power operates through controlling bodies, health, and populations. This concept explains how corporate and technological systems regulate human existence in subtle and pervasive ways.
Capitalism, Late-Stage: A term describing the advanced phase of capitalism characterized by high financialization, corporate monopolization, commodification of almost all social life, and global market dominance.
Cybernetics: An interdisciplinary science of systems, feedback loops, and control in machines and living organisms; it underpins many techno-futurist visions as well as critiques of subjectivity as governed by recursive information flows.
Datafication: The process by which social and biological phenomena are transformed into quantifiable data, enabling new forms of monitoring, control, and commodification within digital infrastructures.
Enlightenment: The intellectual and cultural movement that emerged in Europe during the 17th and 18th centuries, emphasizing reason, science, individual autonomy, and progress, which laid the foundation for modern humanism but has been critiqued for promoting a narrow, anthropocentric view of the human subject.
Humanism: The Enlightenment-rooted belief in the human as a rational, autonomous individual, the center of moral and intellectual life, premised on human exceptionalism and supremacy over ecosystems and all organisms.
Neoliberalism: An economic and political framework emphasizing free markets, individual responsibility, competition, and privatization, which shapes contemporary ideas of selfhood as entrepreneurial and self-optimizing.
Platform Sovereignty: A concept describing how tech giants like Apple, Google, and Meta act not merely as companies but as new forms of governance through their control of digital infrastructures, data flows, and social behaviors.
Posthumanism: A critical theoretical perspective that challenges humanism’s assumptions by questioning the fixed boundaries of the human, emphasizing relationality, hybridity, and the decentering of the human subject within a network of nonhuman agents, whether machines, animals, or ecological systems.
Singularity: A theoretical future moment when artificial intelligence will exceed human intelligence, triggering exponential technological growth and transforming civilization in unpredictable ways.
Stack: A conceptual model describing planetary-scale computational infrastructure as layered, interconnected systems shaping governance, technology, and subjectivity.
Subjectivity: The condition or quality of being a subject, often understood as the way individuals experience and interpret themselves and the world. In critical theory, subjectivity is seen as socially constructed and politically influenced.
Techno-solutionism: The belief that technological innovation can provide straightforward solutions to complex social, political, and ecological problems, often ignoring deeper systemic causes.
Technological Instrumentalism: The view that technology is a neutral tool to be used for human ends.
Zoe: A term drawn from Greek, meaning the force of life itself, used in posthumanist theory to contrast with bios (structured, socially recognized human life), referring instead to a trans-species, nonhuman vitality that underpins all life and emphasizes planetary solidarity across species.
By grounding the discussion in these terms, I aim to situate my argument clearly within ongoing debates while making it accessible to readers less familiar with this specialized vocabulary.
2.3. Humanist Continuity
One of the most incisive critiques emerging from posthumanist thought is that transhumanism does not represent a rupture from humanism; rather, it intensifies and retools its foundational commitments. Far from dismantling the figure of the rational, autonomous, mastery-driven subject, transhumanism reinforces it—only now outfitted with neural implants, biometric sensors, and biotech enhancements. The Enlightenment subject, long critiqued for its exclusions and abstractions, reappears here in upgraded form: not deconstructed, but fortified.
Building on this insight, thinkers such as Braidotti (2013) and Wolfe (2010) argue that transhumanism sustains the fantasy of human supremacy—over nature, over the body, and even over mortality itself. In this view, the transhumanist subject is not a departure from humanism but its perfection: a retooling, not a disruption. This continuity forms a core part of my own critique. I contend that transhumanism operates not as a radical break, but as a techno-utopian intensification of the liberal humanist legacy. It does not escape the figure of the autonomous subject; rather, it rebrands it as augmented and optimized, but still tethered to systems of control and deeply embedded in corporate infrastructures.
This continuity is exemplified in the work of key transhumanist thinkers. Kurzweil’s (2005) account of the Singularity envisions a frictionless human future in which mortality and fallibility are technologically eliminated—a fantasy of pure transcendence updating Enlightenment ideals amid digital acceleration. Bostrom (2014), while attentive to existential risk, nonetheless frames the future of humanity through a managerial, optimization-driven logic. More (2013) explicitly links transhumanist enhancement to libertarian values of autonomy, competition, and personal self-mastery.
Together, these visions amplify humanist logic within the ideological apparatus of neoliberalism (Bostrom, 2014; Kurzweil, 2005; More, 2013). What is enhanced is not simply the body or mind, but the market-oriented, self-optimizing subject: competitive, modular, and governable. As Fiesler (2020) notes, governance structures embedded in technological design shape and constrain users under the guise of choice. Lindtner et al. (2014) further reveal how spaces of supposed innovation (such as hackerspaces and maker cultures) reproduce neoliberal ideals through techno-entrepreneurial models of selfhood and labor.
By contrast, Haraway’s (1991) cyborg theory offers a more disruptive posthumanist critique. Her cyborg ontology emphasizes hybridity, partiality, and entanglement. Rather than extending humanist logic, she envisions a non-liberal posthumanism rooted in relationality, situated knowledge, and resistance to commodified technoculture. This intervention foregrounds a technogenesis that resists the market’s imperative to enhance and optimize. Rather than perfecting the human, it gestures toward forms of subjectivity that elude control, commodification, and corporate temporalities (Haraway, 2016; Hathaway, 2020).
2.4. Technological Instrumentalism
Another critical concern articulated by posthumanist scholars centers on how transhumanism conceptualizes technology as a neutral, controllable instrument serving human ambitions. This framing, however, glosses over the intricate entanglements between technology, politics, economics, and power structures; and, crucially, how technology in turn shapes human subjectivities and social relations.
Haraway (1991), in her seminal Cyborg Manifesto, disrupts both utopian and dystopian narratives by advancing a relational, hybrid ontology. She envisions human and machine not as distinct, hierarchical entities but as co-constitutive assemblages, mutually shaping one another.
2.5. Anthropocentrism and Relationality
A further critique advanced by posthumanist theorists targets transhumanism’s entrenched anthropocentrism, specifically its relentless focus on enhancing the isolated human subject, often severed from ecological entanglements, other species, and planetary systems. In this frame, the human is positioned as both the primary agent and the ultimate beneficiary of technological progress. Transhumanism remains committed to making humans more (more rational, more optimized, more enduring) rather than rethinking what it means to live in relation to nonhuman others.
By contrast, posthumanist thinkers such as Haraway (1991), Braidotti (2013), and Wolfe (2010) foreground relationality, interdependence, and affective entanglement. Haraway (1991), in her notion of becoming-with, pushes beyond critique to envision new multispecies ontologies based on kinship, partial connections, and shared vulnerability. Braidotti (2013) challenges the centrality of the human by advocating for a posthuman ethics rooted in zoe-centric equality and ecological accountability. Wolfe (2010), drawing from systems theory and philosophy, deconstructs the illusion of the self-contained subject and stresses the constitutive role of nonhuman forces.
Building on these interventions, I argue that transhumanism does not merely overlook relationality; it actively suppresses it (Hayles, 1999; Kurzweil, 2005). The figure it valorizes is a corporate self: optimized, datafied, and owned. Its vision of the future is not collective or ecological but atomized and proprietary (Hayles, 1999). This suppression of relationality finds a further analogue in how cognition itself is reimagined in posthumanist thought, one of Hayles’s (1999) central arguments.
Parisi (2013) deepens the critique of anthropocentrism by theorizing algorithmic cognition as a form of posthuman intelligence that emerges not from human rationality but from nonconscious, computational processes. In her account, cognition is no longer exclusive to human minds; it becomes a distributed function of code, data, and architecture that resists anthropomorphic assumptions. Schank (2019, 2020) extends this line of critique by showing how these same algorithmic infrastructures commodify behavior and fragment ethical integrity. His work demonstrates how digital systems displace stable identities and relational accountability, creating environments in which integrity is not merely neglected but systematically eroded by design.
Collectively, these thinkers enact a decisive break from transhumanism’s enhancement-driven individualism. Instead of pursuing mastery over the body or the environment, the posthumanist orientation insists on interdependence, ecological embeddedness, and non-sovereign forms of subjectivity. It is in this space of becoming-with, of algorithmic decentering, of techno-ethical vulnerability that alternatives to transhumanist futures begin to emerge.
2.6. Neoliberal Subjectivity
2.6.1. Neoliberal Self and Enhancement-as-Commodity
Perhaps the most pressing social critique of transhumanism is its reproduction of the neoliberal subject: an individual who is self-responsible, relentlessly self-optimizing, fiercely competitive, and deeply compliant with market imperatives. Enhancement technologies thus shift from mere tools to consumer choices, transforming the body and mind into perpetual sites of productivity, investment, and commodification. This embeds individual bodies within market logics, making self-improvement a form of economic participation shaped by neoliberal values.
2.6.2. Datafied Subject and Platform Capitalism
Scholars such as Dean (2002, 2010) have revealed how digital platforms capture and responsibilize desire, reinforcing neoliberal ideals through feedback, surveillance, and behavioral control. Braidotti (2013) critiques this entrepreneurial self as a distortion of posthumanist thought, where selfhood becomes a project of continuous enhancement rather than relational becoming. Corporate techno-governance intricately shapes bodily autonomy by embedding surveillance, feedback, and algorithmic control into everyday biometric and data-driven practices, effectively transforming individuals into datafied subjects whose behaviors and desires are continuously monitored, responsibilized, and regulated within neoliberal infrastructures (Dean, 2002, 2010; Fiesler, 2020; Lupton, 2016; Pasquale, 2015; Schank, 2019, 2020).
2.6.3. Cyborg: A Critical Counterpoint
Haraway’s (1991) cyborg theory offers a critical counterpoint to this neoliberal trajectory. Rather than accepting the bounded, entrepreneurial self, she proposes a hybrid ontology that resists binary logics such as organic versus technological, human versus machine, and self versus other, exposing how technological embodiment can be either complicit in or resistant to capitalist modes of control (Haraway, 1991, 2016; Hathaway, 2020). Crucially, her cyborg is not a self-branding subject of market desire but a politically situated figure capable of subverting dominant narratives. Yet transhumanism, by contrast, evacuates this radical potential by folding hybridity into commercial logics of enhancement, optimization, and proprietary control.
2.6.4. Obscured Inequalities and Algorithmic Governance
Dean (2002, 2010), Braidotti (2013), and I (Lockhart, 2025b) have exposed how this logic obscures systemic inequalities, as enhancement remains a privilege accessible only to those with earned or inherited social and economic capital, leaving others behind both materially and symbolically. A dense body of scholarship further supports this critique: Schank (2019) illustrates how algorithmic governance enforces compliance by turning bodies into disciplined data points; Cooper (2008) frames biotechnology as speculative capital within neoliberal markets; Lupton (2016) documents how self-tracking extends neoliberal selfhood through embedded surveillance and discipline; and Pasquale (2015) highlights opaque corporate algorithms that predict and manipulate behavior, further entrenching power asymmetries.
2.6.5. Transhumanism as Incubator of Neoliberal Subjectivity
Building on this extensive evidence, I argue that transhumanism does more than mirror neoliberal subjectivity; it actively incubates and intensifies it. The future human, far from being merely enhanced, is licensed, leased, branded, and, ultimately, owned—a subscription model wrapped in flesh, no longer a sovereign self but a commodified vessel engineered for corporate extraction and recorded as a line item in the ledger of transhuman capital (Contractor et al., 2022; Cooper, 2008; Ericson et al., 2021; Fiesler, 2020; Lindtner et al., 2014; Lupton, 2016; Parisi, 2013; Pasquale, 2015; Sadowski, 2020; Schank, 2019, 2020). The fantasy of the self-improving posthuman is not one of liberation, but of logistics: a techno-cultural hallucination sustained by the algorithmic machinery of corporate power and market capture.
2.7. Disambiguation of “Posthuman”
The term posthuman is frequently appropriated in conflicting ways, generating confusion and ideological slippage. Transhumanist thinkers such as Kurzweil (2005), Bostrom (2014), and More (2013) envision the posthuman as a technologically enhanced, perfected being who transcends biological limits, essentially extending humanist ideals through technology. Conversely, posthumanist theorists, including Haraway (1991), Braidotti (2013), Wolfe (2010), and Parisi (2013), articulate a different conception. For them, the posthuman is not a superior human but a decentered, distributed, and relational figure embedded within ecological and technological assemblages. This perspective challenges essentialist and progressivist assumptions, disrupting binaries like human/machine and nature/culture.
This fundamental distinction is critical for positioning my own intervention. To clarify ongoing misappropriation, Table 1 provides a comparative overview of these divergent usages of posthuman in transhumanist and posthumanist perspectives.
3. Illustrative Cases and Extrapolated Futures
Transhumanism sells a future of longer life, sharper minds, and seamless human–tech integration. But this future is already locked down, monopolized by corporate platforms that control not only our data, but also our desires, identities, and even our bodies. The enhanced human is not free; they are fed, ventilated, and updated by their corporate parents.
Imagine a branded transhuman whose biometric systems are linked to a subscription they can no longer afford. Their neural enhancements begin to lag. Notifications pile up. Then the ventilation halts, not because of a medical failure but because a payment was not processed. In the world transhumanism builds, existence itself becomes a service tier.
Another sits before a forced End User License Agreement (EULA), blinking notice suspended at “I Agree.” Declining means immediate disconnection from their social neural interface. No messages from their children. No shared memories. No presence in the only space where family now gathers. To refuse is to vanish. To agree is to surrender legal rights to their thoughts, behaviors, and biometric expressions. They consent, not out of will, but out of grief and necessity.
These critiques make it clear that transhumanism is not a neutral vision of the future. It is a project with philosophical, political, and economic stakes. These stakes are shaped by market and profit imperatives, enabled by regulatory loopholes, and perpetuated through ongoing corporate governance failures. What remains less examined, and what I aim to foreground in the following section, is how these dynamics are not only discursive but also infrastructural. Branded technologies, extractive platforms, and datafied systems are already reshaping what it means to be human. Corporate platforms now govern futures and desires through infrastructural and algorithmic control. They transform individuals into commodities and data points within sprawling ecosystems of surveillance and capital extraction (Bostrom, 2014; Edwards, 1996; Ericson et al., 2021; Haraway, 1991; Kurzweil, 2005; Schank, 2019).
To ground these claims in concrete analysis, this section presents three illustrative cases of platform-human entanglement. Each case follows a three-part structure: (1) present dynamics of corporate-platform interaction, (2) future projections that extrapolate from these dynamics, and (3) critical analysis grounded in relevant theoretical literature.
The cases are organized around three emblematic figures: the Apple Human, the Meta Human, and the Google Human. Each reveals how corporate giants operationalize branded technologies to capture biosubjectivity, regulate affect and behavior, and preconfigure human futures under the logics of extraction and control. The cases are not mutually exclusive, but complementary, and a concluding synthesis will draw together their shared contributions to understanding how corporate branding shapes human subjectivity, kinship, and democratic life.
3.1. The Apple Human
The Apple Human exemplifies how biometric tracking and health integration have become deeply embedded in everyday life, shaping identity and control through branded ecosystems.
3.1.1. Present: Biometric Tracking and Health Integration
Today’s Apple Human is defined by biometric tracking and intimate health integration. Wearable devices continuously monitor heart rates, sleep patterns, oxygen levels, and a cascade of other bodily metrics. The body becomes a quantified self: endlessly measured, optimized, and fed back into corporate data streams. This quantified self is not simply about personal health; it forms the foundation of branded biosubjectivity, where identity and value are constructed through subscription-based health ecosystems.
Lupton (2016) demonstrates how self-tracking technologies shape bodily experience and governance, merging health data with identity in ways that obscure structural inequalities. Sadowski (2020) exposes the data infrastructures beneath these devices, showing how they serve as sites of both capital accumulation and behavioral discipline. Cooper (2008) further critiques the health-tech sector by revealing how the commodification of life processes extends neoliberal market logics into the intimate realm of bodily existence.
Media accounts trace this shift from niche biohacking to mainstream normalization. Early adopters who turned their bodies into “medical labs” illustrate the origins of the quantified self movement (Guardian Staff, 2012). The transformation of biometric monitoring into a ubiquitous everyday practice is well documented (MedTech Pulse Staff, 2022). Critical exposés reveal how Apple’s techno-utopian narratives conceal a broader authoritarian potential embedded within its infrastructure (Atlantic Staff, 2024; New Yorker Staff, 2019).
3.1.2. Future: Branded Biosubjectivity via Subscription Ecosystems
In the near future, the Apple Human becomes enmeshed in a seamless, yet deeply controlling, ecosystem where life itself is commodified as a service. Beyond mere health tracking, biometric data will be integrated into an elaborate subscription model that governs access to personalized health insights, predictive diagnostics, and life-enhancing medical interventions. The body ceases to be a sovereign entity and transforms into a continuous data stream flowing into proprietary corporate platforms designed to extract surplus value under the guise of wellness and optimization.
Algorithmic systems will monitor and modulate bodily states in real time, nudging users towards prescribed behaviors to maximize health outcomes but, primarily, platform profits. Emotional states and stress levels, derived from biometric signals, will be tracked and subtly shaped, aligning personal well-being with corporate goals. As Cooper (2008), Lupton (2016), and Sadowski (2020) argue, this digital capitalism of biosubjectivity extends market logics into intimate corporeal experience, making human life itself a site of ongoing economic extraction.
Subscription tiers and exclusive licensing agreements will create new inequalities, where access to critical health services and interventions depends on the capacity to pay and comply with opaque EULAs. Pasquale’s (2015) critique of black box algorithms illustrates how these systems operate beyond user scrutiny, encoding systemic biases and reinforcing socioeconomic disparities. Moreover, EULAs and behavioral use licenses will constrain autonomy, normalizing consent to pervasive data collection and behavioral regulation. Ericson et al. (2021) emphasize how such contracts shape user attitudes and limit meaningful choice, while Contractor et al. (2022) warn that emerging AI governance frameworks may entrench corporate control rather than challenge it.
The Apple Human’s future will see the blurring of boundaries between care and control, wellness and surveillance. Health ecosystems will act as infrastructures of power governing not just bodies but emotions, identities, and social participation. In this future, continued existence depends not only on biology but on the capacity to pay, subscribe, and comply with corporate demands. The enhanced human is not liberated or empowered but trapped within a system that commodifies life itself, turning bodies into platforms and existence into a service controlled by corporate interests.
3.1.3. The Apple Human: Critical Analysis
The Apple Human exemplifies how biometric and health tracking technologies serve as instruments of governance under digital capitalism. This integration of bodies into corporate data ecosystems is not simply about health optimization but constitutes a profound reshaping of subjectivity through branded biosubjectivity and subscription-based control.
Lupton’s (2016) work on self-tracking highlights how these technologies mediate bodily experience while obscuring broader social inequalities. Sadowski’s (2020) critique of data infrastructures reveals how seemingly personal health data fuels capital accumulation and disciplinary mechanisms. Cooper’s (2008) analysis of life commodification within health tech further underscores the neoliberal extension of market logics into intimate corporeality, transforming bodies into sites of continuous extraction.
Pasquale’s (2015) concept of black box algorithmic governance is crucial for understanding how opaque systems enforce these controls, embedding biases and reinforcing structural inequalities beneath the veneer of technological innovation. The enforcement of restrictive EULAs, as Ericson et al. (2021) demonstrate, limits meaningful user autonomy and consent, entrenching corporate power. Contractor et al. (2022) offer a critical perspective on AI governance frameworks, warning that behavioral use licensing may reinforce existing corporate monopolies rather than enable ethical oversight or user empowerment. This future governance model risks further alienating users, binding their biological existence to corporate subscription services.
This critique aligns with broader concerns in surveillance capitalism as detailed by Zuboff (2019), who shows how intimate data is commodified and behavioral futures sold back to consumers. The Apple Human’s transformation echoes Terranova’s (2004) arguments about the exploitation of emotional and cognitive labor in digital economies, and Berlant’s (2011) insights on affective attachments reveal the political stakes of these technological dependencies. Further enriching this analysis, Sedgwick (2003) and Ahmed (2004) explore how affect circulates culturally and politically, while Massumi (2002) and Seigworth and Gregg (2010) reveal the preconscious dynamics of affect that underpin these embodied governance systems. Hardt and Negri (2000) connect these mechanisms to neoliberal global power structures, and Probyn’s (2005) examination of shame and vulnerability deepens understanding of the emotional regulation embedded within corporate platforms. Together, these critiques illuminate the Apple Human as a figure trapped within a corporate ecosystem that commodifies life itself, transforming health and identity into algorithmically mediated products, and ultimately reinforcing socio-economic hierarchies under the guise of wellness and technological progress.
3.2. The Meta Human
The Meta Human reveals how immersive digital platforms govern affect, behavior, and social realities through algorithmic control.
3.2.1. Present: Oculus and Horizon Worlds
The Meta Human emerges through immersive platforms like Oculus and Horizon Worlds, where affective experience and social interaction are governed by algorithmic design. Meta’s virtual environments do more than connect users; they capture emotional responses, shape behavior, and mediate political discourse on an unprecedented scale.
Dean (2010) highlights how immersive platforms capture affect, transforming human emotions into data for continuous extraction. Pasquale (2015) reveals the black box society within these digital spaces, where opaque systems determine interaction and visibility. Morozov (2013) critiques Meta’s techno-utopian aspirations, warning that beneath the promise of immersive freedom lies a latent infrastructure of authoritarian control.
Media investigations document the real-world consequences of Meta’s governance. The New York Times details Facebook’s facilitation of the Capitol riot4, showing how the platform’s shifting policies, beginning with Zuckerberg’s 2016 apology addressing misinformation and culminating in Meta’s 2025 decision to end its fact-checking program, enable political mobilization that ultimately destabilizes democratic processes (Frenkel & Isaac, 2025). Similarly, Google Maps complied with the U.S. administration’s executive order by renaming the Gulf of Mexico as the “Gulf of America” for U.S. users, reflecting how platform design responds to and enforces political mandates in geographic representation (Davies, 2025; Zhuang, 2025). The Guardian demonstrates how Facebook’s algorithm amplifies authoritarian content, shaping collective affect and political attitudes (Hill, 2024). Whistleblower reports expose internal research revealing harm to teen mental health caused by unchecked design choices in Meta’s immersive systems (Hern, 2021). Scholarly analysis confirms that Meta governs emotional, political, and economic behavior at a platform-wide scale (Huang & Krafft, 2024).
Table 2 presents the official record of 14 federal oversight hearings conducted by House and Senate committees between 2019 and 2025, documenting recurring failures by social media platforms, including Meta, to implement effective misinformation and disinformation controls.
Table 2 demonstrates the limits of governmental accountability; in parallel, journalistic investigations by The New Yorker (New Yorker Staff, 2019), The Atlantic (Atlantic Staff, 2024), and other outlets situate these failures within the larger convergence of platform capitalism and techno-authoritarianism. Meta’s virtual environments do not simply host social life; they manufacture it, embedding behavioral norms and affective cues into every interface. These systems function as infrastructures of governance, operating under the guise of innovation while advancing the political economy of surveillance and control.
The examples of Facebook’s facilitation of the Capitol riot, Google Maps’ renaming of the Gulf of Mexico to the “Gulf of America,” and whistleblower revelations about Meta’s internal research collectively reveal a platform governance model that operates through the active shaping of political, social, and emotional realities. Facebook’s amplification of divisive political content destabilizes democratic processes by directly fostering polarization and authoritarian attitudes, while Google’s compliance with renaming geographical landmarks demonstrates how digital platforms can extend state power by embedding nationalist narratives into everyday user experiences. Meanwhile, the neglect exposed in content moderation and mental health impacts shows how corporate imperatives override public welfare, commodifying user behavior and emotion. Taken together, these instances show that Meta and its allied tech platforms function not only as facilitators of social interaction but as infrastructures of techno-authoritarian control underpinned by surveillance capitalism, with profound implications for democracy, identity, and social justice. This underscores the urgency of interrogating these platforms’ governance frameworks not just as technological choices but as deeply political acts shaping societal norms and power relations.
3.2.2. Future: Algorithmic Intimacy and Affective Control
Meta’s platforms capture your heartache, your pleasure, your pain, your folly, depersonalize your emotions, your experiences, your life, and then sell them back as personalized content. One person’s unique suffering becomes everyone’s shared burden, transformed into a commodified currency that shapes how millions feel and behave. Individuality fades while collective vulnerability swells.
Inside immersive worlds like Horizon Worlds, your relationships, your politics, and your desires become data points routed through platforms that watch and score your feelings in real time. Your self shrinks to an interface, your moods tracked, your preferences nudged, and your choices shaped by predictive systems designed to maximize engagement, compliance, and profit.
Dean’s (2010) analysis of feedback loops shows how these platforms turn affect into raw material for endless extraction, while Pasquale’s (2015) concept of the black box society reveals the hidden power of algorithms governing your inner life.
Meta’s affective control extends beyond individual nudging to full-spectrum governance of emotional life. Meta’s systems read your gaze, your voice tone, and your biometric signals to make your subjectivity programmable and governable. Political dissent or emotional difference (for example, a muted reaction in a virtual meeting or an ambiguous emoji) can trigger downgrades or social exile. Participation becomes the price of access to work, school, and social belonging, making emotional conformity a form of economic control.
Recent innovations further entrench this model. Meta’s standalone AI app, launched in April 2025, integrates with platforms like WhatsApp, Instagram, Facebook, and Messenger (Meta, 2025). Powered by the LLaMA 4 model, the app offers personalized interactions, including voice capabilities and real-time memory, creating a more intimate and continuous user experience (Meta, 2025; Udayshankar, 2025). The app’s “Discover” feed encourages sharing of AI interactions, raising concerns about privacy and the commodification of personal experiences.
Business Insider (2025) characterizes Meta AI’s public feed as “the saddest place on the internet,” underscoring the emotional toll and depressive effects these algorithmically curated environments can impose on users.
Meta’s hAI! Friend MR app for Meta Quest delivers AI companions designed for emotional support and personalized interaction in VR and AR spaces (Udayshankar, 2025). These companions blur the line between genuine human connection and algorithmic affect while being subtly puppeteered by Meta’s corporate marketing machinery. Users are nudged to deepen their emotional investments through monetized interactions (such as gifting, virtual “sugar baby” dynamics, or Valentine’s Day offerings to their AI romantic partners), transforming intimacy into a platform for continuous consumer extraction.
3.2.3. The Meta Human: Critical Analysis
This future signals a profound shift: AI companions are no longer mere tools but emotional actors within a surveillance economy. Meta’s orchestration of affect commodifies and recycles human subjectivity itself into platform capital, encouraging addictive relational loops that generate profit. This echoes Schafheitle et al.’s (2020) warnings about datafication reshaping social control and Morozov’s (2013) critique of techno-solutionism masking systemic power imbalances. It also resonates with Zuboff’s (2019) concept of surveillance capitalism, which details how behavioral futures are predicted, modulated, and sold back to users, transforming intimate experiences into data assets. Complementing Berlant’s (2011) focus on affective attachments and political optimism, Terranova’s (2004) work on digital labor highlights how emotional and cognitive effort is exploited under neoliberal regimes. Further enriching this analysis, Sedgwick (2003) and Ahmed (2004) explore the cultural circulation of affect and emotion, while Massumi (2002) and Seigworth and Gregg (2010) illuminate the nonconscious dynamics that animate affective politics. Hardt and Negri (2000) connect these affective economies to broader global neoliberal power structures, and Probyn’s (2005) examination of vulnerability and shame adds depth to understanding how affective life is regulated within corporate platforms.
3.3. The Google Human
The Google Human navigates a world shaped by predictive systems that anticipate needs, behaviors, and decisions, embedding anticipatory governance into daily life.
3.3.1. Present: Predictive Platforms and Everyday Navigation
The Google Human lives within a seamless web of services (Gmail, Maps, Search, Assistant, Calendar), all synchronized to anticipate and guide action. Google’s infrastructure captures queries, habits, locations, and timing patterns, creating an up-to-date behavioral map of each user. This predictive capacity transforms everyday conveniences into deep forms of behavioral governance.
Zuboff (2019) identifies this as the extraction of behavioral surplus, where user data is repurposed to predict, and eventually preempt, future behavior. What begins as personalized assistance becomes algorithmic nudging: suggested routes, autocomplete phrases, calendar prompts, and targeted ads all guide the user toward profitable outcomes. Parisi (2013) explains this as computational rationality, where decision-making is refracted through pre-processed models that make acting otherwise increasingly improbable.
Google is a planetary infrastructure, part of a multi-layered system of sovereignty that reshapes civic, geographic, and perceptual realities (Bratton, 2016). Journalistic and long-form reporting underscore this shift. Wired (Ratliff, 2007) and Public Seminar (Wark, 2015) document how Google Maps reshapes human geography, redefining not just how people move but how they conceptualize space. Google’s AI-driven suggestions in Gmail and Android predict actions and emotional tone (Jakesch et al., 2023), while critics warn of overreach in emotion recognition and mental health detection (Coldwell, 2021; Webb & Goel, 2025).
3.3.2. Future: Algorithmic Captivity in The Stack
Imagine living as the Google Human in a future where every step, every choice, and every interaction is pre-judged by opaque algorithms steeped in racial bias and corporate control. You face digital redlining, not through explicit laws but via coded discrimination embedded deep within platforms that shape your access to jobs, housing, healthcare, and essential services. Your neighborhood is algorithmically gerrymandered: surveillance cameras and sensors watch your every move, while predictive policing algorithms mark you as a future threat, ushering in preemptive incarceration before any crime is committed.
Much like travelers once guided by The Negro Motorist Green Book (Green, 1949), you navigate a fractured cityscape where “safe” spaces are algorithmically gated, leaving you confined to digital and physical enclaves deemed acceptable by corporate-state powers. Your identity, reduced to data points, is constantly scored and ranked, your digital shadow controlling everything from the ads you see to whether you qualify for social programs or bail.
This fractured lived experience unfolds across Bratton’s (2016) planetary-scale computational megastructure, The Stack, which enforces racialized control at every level:
Earth layer: The physical environment sustaining this system is violently extracted, disproportionately burdening marginalized communities whose land and resources power the infrastructure that oppresses them. Environmental degradation and toxic waste perpetuate histories of ecological racism, grounding digital sovereignty in real-world extraction and violence (Bullard, 2021; Mohai et al., 2019; NASEM, 2024; Ross et al., 2022; Schafheitle et al., 2020).
Cloud layer: Google’s centralized data centers hoard your personal information, transforming it into behavioral futures markets. Predictive models systematically assign greater risk to Black and Brown bodies, digitally recreating the boundaries of historic redlining through biased machine learning (Mohai et al., 2019; Raji & Buolamwini, 2019; Zuboff, 2019). Your potential actions become commodities traded in opaque marketplaces, defining your possibilities before you act (Morozov, 2013).
City layer: Smart urban systems enforce digital apartheid. Sensors, cameras, and biometric scanners partition the city into surveilled zones, algorithmically gerrymandering neighborhoods by social desirability and predicted compliance (Kitchin, 2020; Monahan, 2021; Sadowski, 2020). You navigate an algorithmically curated cityscape restricting your mobility and access.
Interface layer: Google’s platforms regulate your speech and social interactions. Content moderation algorithms nudge conformity and silence dissent, erasing marginalized voices while amplifying sanitized narratives that serve corporate and state interests (Gillespie, 2020; Roberts, 2019; Tufekci, 2018). Interfaces become tools of political and cultural control (Pasquale, 2015; Schafheitle et al., 2020).
User layer: Your subjectivity is fragmented into quantifiable metrics (Lupton, 2016). Autonomy is hollowed out as your behavior is predicted, scored, and regulated. Freedom becomes conditional, contingent on algorithmic approval (Tufekci, 2018). Sovereignty, once a claim of self-determination, erodes into a state of managed participation without consent (Gillespie, 2020).
In this world, the Google Human lives under anticipatory governance, where algorithms not only predict behavior but enforce it, shaping social life with chilling precision. Algorithmic sovereignty replaces people and states as the ultimate arbiter of inclusion and exclusion. Access to resources, mobility, and freedom is rationed by invisible lines drawn in code rather than law, perpetuating a digital reincarnation of racialized governance practices like redlining and gerrymandering.
The ghosts of that earlier era persist not in printed guides but encoded within opaque algorithms deciding who moves, who belongs, and who is confined. Within The Stack, the Google Human is trapped inside a planetary computational apparatus surveilling, controlling, and preemptively punishing along lines of race and class, turning autonomy into a myth and liberty into an algorithmically engineered performance.
3.3.3. The Google Human: Critical Analysis
The figure of the Google Human reveals a cybernetic intensification of interlocking systems of oppression through anticipatory governance. Predictive algorithms operate as adaptive feedback loops that continuously monitor, regulate, and constrain human behavior across multiple infrastructural layers, generating recursive cycles of control and resistance (Dean, 2002; Lockhart, 2025b; Schafheitle et al., 2020). This intensification embeds racialized and class-based power asymmetries into the core of digital architectures, transforming autonomy into a managed performance within algorithmic regimes.
Fundamentally reconfigured by predictive computation, autonomy is subsumed into systems that anticipate, nudge, and ultimately constrain decision-making. Computational rationality describes how individual choices become increasingly preprocessed by algorithmic models, narrowing possibilities and making deviation structurally improbable (Liao & Holz, 2025). Behavioral surplus extraction converts intimate human practices into data commodities, extending neoliberal logics into everyday life (Dean, 2010; MedTech Pulse Staff, 2022; Zuboff, 2019).
The Stack is critical here for understanding how these layers interlock to produce the Google Human as a subject embedded in a planetary computational megastructure. The Earth layer’s violent extraction reproduces environmental racism, disproportionately burdening marginalized communities and establishing a material base for digital sovereignty (Bullard, 2021; Mohai et al., 2019; NASEM, 2024; Ross et al., 2022). The Cloud layer’s centralized data centers and opaque algorithmic futures markets perpetuate digital redlining, encoding racial bias into machine learning systems that reinscribe segregation in computational terms (Benjamin, 2019; Guardian Staff, 2012; Pasquale, 2015; Zuboff, 2019). These biases are structurally embedded, systemically disadvantaging Black and Brown bodies (Garvie et al., 2016; Raji & Buolamwini, 2019).
The City layer partitions urban space into surveilled enclaves controlled through predictive policing and smart infrastructure technologies. Algorithmic gerrymandering creates digital apartheid zones that gatekeep safety and opportunity, restricting mobility and access (Green, 1949; Benjamin, 2019; Raji & Buolamwini, 2019). The Address layer compounds these injustices with biometric misidentifications and pre-crime incarceration algorithms that punish anticipated futures rather than past actions, extending state control through data-driven anticipation (Garvie et al., 2016; Monahan, 2021; Raji & Buolamwini, 2019).
At the Interface layer, platforms mediate social interaction via content moderation and algorithmic curation, silencing dissenting and marginalized voices while amplifying sanitized narratives aligned with corporate and state interests (Brayne, 2021; Dean, 2002; Gillespie, 2020). These processes function as technologies of political and cultural control, shaping public discourse and eroding democratic deliberation. The User layer reduces subjectivity to quantifiable metrics via behavioral scores, affective captures, and predictive profiles that hollow out autonomy and condition participation within algorithmic power structures (MedTech Pulse Staff, 2022; Schafheitle et al., 2020; Zuboff, 2019).
Together, these layers reveal that Google’s platform infrastructures do not merely facilitate or enhance life; they actively incubate, govern, and commodify it. The Google Human becomes a site of managed, datafied existence where freedom depends on compliance with algorithmic governance (Dean, 2010; Green, 1949; Guardian Staff, 2012; Zuboff, 2019). This process rearticulates liberal sovereignty as a cybernetic mechanism of control, shifting the locus of power from states to techno-conglomerates that invisibly govern inclusion and exclusion (Crawford & Paglen, 2021; Green, 1949; Lockhart, 2025b). Cybernetic feedback loops embedded within anticipatory governance systems intensify structural inequalities by preempting behaviors and shaping lived realities through computational rationality and behavioral commodification. The Google Human lives with constrained agency, simultaneously empowered and trapped by the planetary megastructure.
4. Conceptual Reflection: Platform Sovereignty and the humanOS
4.1. Attention and Affect Extraction
The contemporary human operating system (humanOS) is embedded within a sprawling platform ecosystem whose power extends far beyond mere technology. It functions as a comprehensive apparatus of attention and affect extraction, where human desires, emotions, and social interactions are systematically harvested and converted into commodified data streams. Platforms deploy algorithmic architectures specifically designed to capture and monetize user attention at scale, feeding complex cross-industry markets spanning advertising, healthcare, finance, and beyond. This dynamic enmeshes users within an endless cycle of consumption and engagement, where affect becomes a resource mined for profit, blurring the boundaries between subjectivity and capital.
Apple’s systematic modularization and architectural segmentation of its operating system ecosystem exemplify strategic platform sovereignty. Internal terminology such as homeOS and audioOS, revealed through job listings and developer beta leaks (Mayo, 2021), reflects a deliberate partitioning of OS functionalities tailored to discrete device categories and use cases. This segmentation facilitates granular governance over user interaction modalities and data flows, aligning with the 2025 adoption of year-based OS versioning (Cunningham, 2025). Such structural differentiation underscores how platform operators engineer affective and attentional economies through vertically integrated, branded software infrastructures that enable precise control over user environments and behaviors.
4.2. Socioeconomic Stratification and Unequal Access
Crucially, this attention economy intersects with socioeconomic stratification. Access to enhancement technologies, datafied infrastructures, and biometric governance is unevenly distributed, reproducing and exacerbating existing social inequalities. Those with greater economic capital gain privileged entry into the branded transhumanist futures, while marginalized communities face exclusion or coerced participation under surveillance regimes that amplify precarity. Thus, the humanOS is not a neutral or universal platform but one marked by hierarchical inclusion and exclusion based on class, race, gender, ability, and other axes of difference.
4.3. Cross-Industry Consumer Marketing and Corporate Ecosystems
At the core of this ecosystem lies a relentless consumer marketing logic, which incentivizes corporate stakeholders across multiple industries to align their interests. Health tech companies, insurance providers, wellness startups, and social media platforms form symbiotic relationships to capture value from user data and behavioral patterns. This cross-industry stakeholder network fuels an expansive marketplace where bodily and cognitive enhancement are not only individual choices but tightly bound to ongoing consumerism, subscription models, and proprietary ecosystems that lock users into corporate control.
4.4. Branding, Brand Loyalty, and Kinship Dynamics
An extension of this marketing logic is the strategic use of branding and brand loyalty, which profoundly shapes social relations, kinship, and heritage. Platforms and corporations cultivate generational brand attachments, effectively locking entire families into lifelong ecosystems of consumption and data extraction. This branding goes beyond products or services; it determines who can be your friends, your social circle, and even who can be recognized as kin. By embedding familial and social identity within branded environments, corporations enforce generational entrapment, extracting wealth, data, and loyalty across time, thus reproducing social hierarchies and corporate dominion through the fabric of personal and collective identity.
4.5. Democratic Erosion and Corporate Governance
Such platform sovereignty poses profound threats to democratic life. The collapse of democracy emerges as these corporate platforms, operating with minimal regulatory oversight, mediate public discourse, shape political imaginaries, and surveil dissident voices. The privatization of critical infrastructures results in policy capture, where corporate interests supersede public good. This undermines collective deliberation and accountability, eroding the conditions necessary for meaningful democratic participation. As political power migrates into opaque algorithmic systems, the future of self-governance becomes precarious.
4.6. Discrimination, Eugenics, and Algorithmic Policing
Embedded within this dystopian vision are stark discriminations of bodies and identities. The ideal datafied subject promoted by transhumanist and platform logics embodies racialized, gendered, able-bodied, and normative ideals. Algorithmic biases reinforce systemic exclusions by shaping who gains access to enhancements and whose data is deemed valuable. This perpetuates historical patterns of marginalization and exclusion, now mechanized through coded infrastructures that invisibly police and discipline bodies that deviate from corporate norms.
This dynamic extends into a contemporary form of eugenics, where datafication and enhancement converge to engineer an ideal subject optimized for market efficiency and governance. Biometric monitoring, predictive analytics, and algorithmic decision-making collectively enforce standards of productivity, health, and comportment that echo exclusionary logics of bodily perfection and social worth. The human is no longer a sovereign individual but a calibrated entity molded to fit neoliberal imperatives, with profound ethical and political implications for autonomy and justice.
4.7. Access, Predictive Control, and Intergenerational Impact
Access also defines life trajectories in this ecosystem. Control over access to information, employment, and opportunity is intimately tied to one’s branded identity and data profile. When an individual is branded, for example, as a thief or otherwise socially marginalized, these labels cascade down generations. Their children become subjects of predictive control regimes that preemptively monitor and restrict them based on presumed criminality or deviance. This form of algorithmic policing extends beyond individuals to entire families, effectively criminalizing and limiting the futures of descendants through datafied stigmatization, entrenching cycles of exclusion and social control.
4.8. Policing Through Biometric and Algorithmic Infrastructures
Policing of subjects occurs both through overt surveillance and more insidious infrastructural controls embedded in everyday technologies. Corporate platforms enforce compliance by regulating biometric data streams, shaping behavioral feedback loops, and constraining participation in digitally mediated social spaces. This governance regime blurs the lines between state and corporate power, embedding disciplinary mechanisms that monitor, classify, and correct individuals within a panoptic network that stretches from social media to health trackers to workplace monitoring systems.
5. Theoretical Foundations for Posthumanist Alternatives and Relational Futures
The critique of transhumanism as a neoliberal governance regime requires more than deconstruction; it demands a theoretical reorientation capable of proposing alternatives. While the preceding sections diagnosed the embeddedness of enhancement logics in platform capitalism, this section seeks to illuminate conceptual resources for resisting the optimization, commodification, and individuation that dominate current technological imaginaries. Drawing from posthumanist, cybernetic, ecological, and critical theoretical traditions, this framework constructs a basis for what might be called relational technogenesis: a situated, multispecies, and anti-proprietary reconfiguration of subjectivity and technology.
The aim is not to articulate a universal counter-program but to trace emergent possibilities for becoming-with in entangled techno-social worlds. Posthumanist alternatives unsettle the foundational dualisms (e.g., human/machine, self/other, nature/technology) that underwrite transhumanist ideologies. In their place, they offer frameworks for understanding subjectivity as co-constituted, distributed, and ecologically embedded. This section synthesizes these traditions not merely as critique but as constructive interventions in how futures might be imagined, designed, and inhabited.
5.1. Platform Capitalism and the Ideological Production of Enhancement
Transhumanist narratives operate within and are amplified by platform capitalism’s structural logics. Far from being neutral facilitators, platforms are sites of epistemic and infrastructural power that script the conditions of subject formation and futurity.
Morozov (2013) critiques the reduction of sociopolitical issues to technical problems solvable by innovation, a hallmark of what he calls techno-solutionism. This ideology frames enhancement as both necessary and inevitable, concealing the political economies and asymmetries of access embedded in such narratives. Barbrook and Cameron’s (1995) Californian Ideology reveals how transhumanism is rooted in a contradictory blend of neoliberal individualism and countercultural libertarianism. This fusion enables the commodification of transcendence, positioning enhanced humans as products of market rationality rather than subjects of political deliberation. Bratton’s (2016) The Stack situates these developments within a broader topology of global computational infrastructures, highlighting how governance, territory, and computation converge to produce a scalable model of techno-subjectivity. Within this model, the subject becomes less a citizen than a user, simultaneously surveilled, responsibilized, and monetized. These frameworks reveal that the future human is not merely imagined but engineered, incubated in corporate infrastructures, and optimized for capital accumulation. Enhancement becomes a regulatory mechanism, channeling desire, embodiment, and identity through platform protocols and economic imperatives.
5.2. Posthumanist Ontologies and the Decentering of the Human
To counter this, posthumanist theory offers ontological alternatives that displace human exceptionalism and reconceive subjectivity as distributed and relational, emphasizing the dynamic entanglement of bodies, environments, and material forces (Grosz, 2008). Haraway’s (1991) cyborg challenges both essentialist and dualist constructions, proposing a hybrid ontology that collapses boundaries between organism and machine. Her later work (Haraway, 2016) develops the idea of becoming-with, an ethics of situated entanglement that foregrounds interdependence over autonomy. Braidotti (2013) similarly critiques the anthropocentric foundations of humanism but advances an affirmative posthuman ethics grounded in zoe, or the transversal force of life shared across species. For Braidotti, the posthuman subject is not a disembodied mind uploaded to the cloud, but a material, affective, and embedded being-in-relation. The ethical imperative shifts from control to accountability, from optimization to co-existence. Latour’s (1993) actor-network theory supports this view by refusing to separate the social and the technical. Agency is reframed not as the property of individuals but as an emergent property of assemblages. Morton (2013) further expands ecological thought with the concept of hyperobjects, entities so massive in scale (e.g., climate systems, infrastructure) that they elude complete comprehension, yet intimately shape lived experience.
These theoretical moves challenge the ontological isolationism that underwrites the transhumanist project. What emerges from these thinkers is a relational ontology that understands being not as given but as produced through interdependent processes across species, platforms, ecologies, and timescales. Such a model invites a more situated and accountable politics of technological design and use.
5.3. Algorithmic Governance and Infrastructural Subjectivity
The transhumanist subject is shaped not only by ideology but by infrastructural governance: the algorithmic systems that mediate affect, behavior, and self-conception.
Dean (2002, 2010) shows how neoliberal rationality operates through digital feedback loops that responsibilize individuals while obfuscating structural conditions. Subjectivity is modulated through metrics, rankings, and algorithmic nudges, aligning inner life with platform logics. Pasquale (2015) critiques the opacity of these systems, which he characterizes as “black boxes” that entrench power differentials under the guise of neutrality. Schank (2019, 2020) argues that algorithmic infrastructures commodify not only data but behavior itself, converting ethical and affective practices into marketable outputs. Lupton (2016) demonstrates how self-tracking cultures exemplify this logic, extending disciplinary mechanisms into the microtemporal rhythms of everyday life.
This regime produces what might be called infrastructural subjectivity: a mode of being shaped through continuous calibration with platform protocols. Rather than transcending the human, the transhumanist subject becomes an intensification of the neoliberal human: self-optimizing, datafied, and enmeshed in a machinic economy of value extraction.
5.4. Toward a Theory of Relational Technogenesis
To challenge the logic of technological enhancement rooted in individuation and optimization, it is necessary to reimagine how subjectivity and technology co-emerge. This section introduces relational technogenesis as a conceptual alternative to both the transhumanist notion of technological mastery and the deterministic view of technical systems as autonomous agents of change. Drawing from philosophy, cybernetics, posthuman ethics, and language theory, relational technogenesis foregrounds the entangled, co-constitutive processes through which human and nonhuman agents (e.g., biological, machinic, infrastructural) develop together. Rather than viewing technology as an external tool or as an oppressive force acting upon passive human subjects, this framework positions technogenesis as an ongoing, situated becoming shaped through mutual adaptation, feedback, and ethical entanglement.
At the foundation of this model lies Bateson’s (1972/2000) cybernetic epistemology. Bateson conceptualized systems as open, recursive networks that learn, adjust, and self-regulate through feedback rather than linear causality. Feedback loops, in this sense, are not just mechanisms of control or regulation but also sites of meaning-making, improvisation, and error. This view enables a critical departure from the instrumental rationality embedded in transhumanist discourses, inviting us to see technological development as a dialogic process: emergent, ecological, and irreducible to predictive modeling. Bateson’s perspective resists the reduction of complex human–technological relationships to closed systems of command and response, instead urging us to consider how living systems and technical systems co-adapt in dynamic, often unpredictable ways.
Hayles (1999) later builds on this cybernetic model by theorizing feedback as a constitutive mechanism in the formation of posthuman subjectivity, emphasizing its embodied and material dimensions.
Grosz (2008) further contributes to this understanding by framing subjectivity and materiality within chaotic, emergent processes where the earth itself is an active participant in the ongoing formation of life and meaning.
A parallel insight can be found in Wittgenstein’s (1953/2009) theory of language games, which underscores that meaning is not intrinsic but generated through contextual and social use. Just as linguistic meaning arises within specific forms of life, so too do technological meaning and function emerge through practice, interaction, and cultural embedding. This challenges any universalizing framework of enhancement or innovation, as the significance and application of technology are always mediated by the relational ecologies in which they are deployed. By bringing Wittgenstein into conversation with cybernetics, we gain a vision of technogenesis that is both epistemically modest and open-ended, one that privileges situatedness, improvisation, and contingency over abstraction and control.
This approach also resonates with Haraway’s (1991, 2016) insistence on situated knowledge and her reconfiguration of the subject as a cyborg, an always-partial, boundary-crossing entity formed through affective, material, and technological entanglements. For Haraway, the cyborg is not merely a metaphor but a political figure that resists the sovereign subject of liberal humanism and its technological heirs in transhumanist fantasies. Her conception of becoming-with offers a feminist and anti-exceptionalist framework for relational technogenesis, in which subjectivity arises not from autonomy but from intra-action—Barad’s (2007) term for the mutual constitution of entangled agencies, a view also reflected in Hayles’s (1999) posthuman subject, co-formed through recursive coupling with intelligent systems, and Grosz’s (2008) emphasis on emergent materiality and chaotic processes as foundational to becoming. In this model, neither the subject nor the machine precedes their relation; rather, each emerges through the other in an unfolding ontogeny of ethical responsiveness.
These theoretical resources collectively reject the dualisms that anchor transhumanist thought: human/machine, natural/artificial, mind/body. Instead, relational technogenesis locates the becoming of both subject and system within assemblages of co-constitution and negotiation. Technological infrastructures are thus not passive environments nor merely tools of enhancement but ontogenetic conditions—material-discursive processes that shape, and are shaped by, evolving forms of life. Technologies do not merely extend human capabilities; they also participate in generating what it means to be human, or more accurately, posthuman, within any given socio-technical milieu.
In synthesizing these insights, relational technogenesis emerges as a counter-framework to techno-sovereignty. It foregrounds interdependence over mastery, emergence over prediction, and ethics over efficiency. It invites us to theorize technology not as a ladder to transcendence but as a partner in co-evolving ecosystems of sense, care, and accountability. Positioned against the extractive and individuating tendencies of transhumanism, relational technogenesis lays the groundwork for technopolitical futures grounded in responsiveness, multiplicity, and situated co-becoming.
5.5. Reimagining Techno-Subjectivity as Collective Becoming
The prevailing transhumanist narrative positions the enhanced human as a branded, commodified subject—a “corporate offspring” engineered within neoliberal frameworks of optimization, market capture, and proprietary control (Dean, 2002, 2010; Fiesler, 2020; Lindtner et al., 2014; Pasquale, 2015; Sadowski, 2020; Schank, 2019, 2020). This neoliberal cybernetic subject is deeply entangled in digital infrastructures designed to capture data flows, modulate behaviors, and channel desires toward corporate ends (Dean, 2002, 2010; Duffield, 2017; Muellerleile & Robertson, 2018; Schank, 2020; Törnberg, 2023). In contrast, posthumanist and critical theory offer conceptual tools to reimagine techno-subjectivity as relational, contingent, and situated within broader ecological and socio-technical networks (Dean, 2002, 2010; Hayles, 1999; Lindtner et al., 2014; Schank, 2019). Haraway’s cyborg resists neat categories and corporate branding by embodying hybridity and multiplicity—an emblem of resistance that is neither fully human nor fully machine, but something politically potent in its in-between-ness (Haraway, 1991, 2016).
Braidotti’s (2013) affirmative ethics pushes this further by advocating for a posthuman subjectivity that embraces zoe, the vital, shared force of life beyond human exceptionalism, and foregrounds interspecies kinship and ecological accountability. This stance challenges enhancement’s dominant logic of individual mastery and control, instead orienting toward collective flourishing, attuned to the material and environmental flows shaping life itself (Grosz, 2008).
Bratton’s (2016) layered model of The Stack situates subjectivity within geopolitical and infrastructural assemblages, highlighting how techno-capital flows intersect with governance and territory. This multi-dimensional model encourages us to see the posthuman not as isolated individuals but as nodes within complex networks that include environment, code, policy, and capital.
Moreover, Bateson’s (1972/2000) cybernetic insights remind us that systems operate through feedback loops and recursive relations, where subjectivity and technology co-evolve within ecological and informational contexts that cannot be fully controlled or predicted. Wittgensteinian language games (Wittgenstein, 1953/2009) complement this by underscoring the social and contingent nature of meaning-making itself, opening space for emergent identities and practices beyond fixed categories.
Together, these frameworks refuse to confine humans to mere data points or branded commodities. They invite us into open-ended interfaces of becoming with others, both human and nonhuman, alongside technology, all entwined within shifting and unpredictable assemblages. This unfolding future breathes with relationality and transformation, weaving the endless fabric of life’s creation.
5.6. Toward Open and Multispecies Futures
This reimagined future rejects the monopolization of enhancement technologies by corporate interests and the neoliberal imperative to optimize the individual as a competitive asset. Instead, it emphasizes:
Collective agency: Subjectivity as co-produced and distributed, dissolving the autonomous self into relational networks of care, mutuality, and responsibility (Dean, 2002, 2010; Lindtner et al., 2014).
Multispecies kinship: An ethics and ontology that recognize interdependence with nonhuman beings and ecological systems, challenging anthropocentrism and fostering stewardship rather than domination (Barbrook & Cameron, 1995; Dean, 2002, 2010).
Technological pluralism: Technologies as tools for emancipation and experimentation rather than instruments of surveillance and control, aligned with community values and ecological sustainability (Dean, 2010; Schank, 2019).
Critical infrastructural awareness: Recognizing the geopolitical and economic layers shaping digital platforms and techno-futures, thus enabling strategic interventions and alternative forms of governance (Edwards, 1996; Haraway, 1991; Lindtner et al., 2014).
This vision draws from posthumanist ethics and political theory, cybernetic systems thinking, and STS critiques of technology to articulate futures where enhancement is decoupled from commodification and individualism, oriented instead toward shared flourishing, resilience, and emergent forms of life.
5.7. From Theory to Practice: Design, Governance, and Ecological Accountability
The theoretical reframing of technogenesis as relational and co-constitutive compels a corresponding shift in technological design, governance, and political imagination. If enhancement under transhumanism enacts neoliberal logics of optimization, individuation, and control, then alternatives grounded in posthumanist ethics and cybernetic reflexivity demand technologies and infrastructures that sustain collective, ecological, and multispecies forms of subjectivity.
Design must move beyond efficiency and control to embrace relationality, situatedness, and ethical accountability.
Haraway’s (1991) earlier work laid the foundation for situated knowledges and becoming-with as frameworks resisting liberal humanist subjectivity. More recently, Haraway (2016) builds on these foundations by foregrounding the political and ecological urgencies of multispecies entanglements and kin-making as essential to navigating the ruins of late capitalism and platform monopolies. She expands the ethical scope to emphasize the entanglement of multispecies relationships with technology design, calling for approaches that recognize the material and affective realities of diverse life forms within techno-social environments. Likewise, Braidotti’s (2013) planetary ethics grounded in zoe-egalitarianism calls for technologies that serve flourishing across human and nonhuman life, rather than reinforcing anthropocentric hierarchies. This means designers and engineers must engage epistemic pluralism, foreground ambiguity, and resist universalizing frameworks that prioritize scalability and optimization above all else.
Building on these perspectives, I have previously contributed a distinct intervention by focusing on the cultural frameworks necessary for feminist solidarity and collective care in social systems (Lockhart, 2025a). While my work primarily addresses relationality within social and psychotherapeutic contexts, it underscores the importance of care, mutual responsibility, and attentiveness to power dynamics. These principles can inform ethical governance and design protocols beyond purely technological domains.
Infrastructures themselves are moral and political actors, not merely technical artifacts. Scholars in STS, such as Fiesler (2020) and Lindtner et al. (2014), emphasize that design decisions encode normative assumptions and shape socio-technical relations. Parisi’s (2013) work on automated cognition further highlights how nonconscious computational processes influence perception and subjectivity at a pre-reflective level, underscoring the urgency of integrating ethical reflexivity into algorithmic design. Technologies built with this awareness prioritize opacity, multiplicity, and responsiveness, encouraging users and communities to engage with complexity rather than succumb to reductive surveillance and control.
Governance frameworks must also be reimagined to move away from technocratic managerialism and proprietary enclosure toward participatory, collective models. Drawing on platform accountability literature and democratic theory, this entails multi-stakeholder deliberation, transparency, and institutional openness. Mechanisms such as data trusts, community-owned infrastructure, algorithmic audits, and open-source ecosystems demonstrate how governance can become a shared relational obligation rather than a corporate entitlement (Pasquale, 2015; Raji & Buolamwini, 2019). Such models challenge asymmetries of power embedded in platform capitalism and open possibilities for infrastructural interventions aligned with social and ecological justice.
Bateson’s (1972/2000) cybernetic framing of feedback as ecological, communicative, and adaptive rather than instrumental offers a foundation for these shifts. Rather than treating feedback loops as mechanisms of control and efficiency, this framing invites us to imagine socio-technical systems as sites of reflexivity, error, and mutual learning. When combined with a commitment to epistemic pluralism and relational ethics, this leads to visions of socio-technical assemblages governed through co-regulation, ethical sensitivity, and collective accountability (Lockhart, 2025b).
5.8. Embodied Praxis and Counter-Infrastructures
Crucially, these theoretical imperatives are already informing practical initiatives that enact posthumanist principles. Open-source software communities organized around mutual aid and anti-extractivist ethics, community-owned broadband cooperatives resisting platform monopolies, and multispecies urban design collectives involving nonhuman stakeholders exemplify this emerging praxis. These initiatives are not idealized utopian fantasies but concrete infrastructures of refusal and proposition that embody counter-logics to dominant platform futures (Fiesler, 2020; Haraway, 1991; Lindtner et al., 2014; Schank, 2019).
Haraway’s (2016) Staying with the Trouble reorients posthumanist praxis by centering multispecies kinship, ecological interdependence, and the ethics of becoming-with. She reframes technological and social infrastructures as inseparable from the living, affective agencies they sustain. Complementing Haraway’s ecological vision, Braidotti’s (2022) Posthuman Feminism offers a critical posthumanism that foregrounds feminist theory as embedded within the material and ideological shifts shaping our global condition. Braidotti argues that feminist movements, especially those rooted in Black, Indigenous, queer, and ecofeminist traditions, are foundational to posthumanism’s critique of neoliberal capitalism, anthropocentrism, and technocentrism.
Building on these insights, I (Lockhart, 2025a) extend both Haraway and Braidotti by situating feminist solidarity and Indigenous relationality at the heart of embodied praxis. Drawing from systemic psychotherapeutic and Indigenous cultural frameworks, I propose that collective care and multispecies solidarity are not only ethical imperatives but also vital infrastructures of shared becoming. Together, Haraway (2016), Braidotti (2022), and my work (Lockhart, 2025a) articulate a synergistic vision: infrastructures that sustain collective life must be rooted in relational care, multispecies accountability, and feminist ethics, offering pathways that resist commodified, enclosure-oriented technological futures.
In sum, from theory to practice, the shift from transhumanist intensifications of enhancement to a model of relational technogenesis calls for technologies and governance models that sustain collective life, engage complexity, and prioritize ecological accountability. This posthumanist vision is grounded in:
Haraway (1991, 2016), who reconceptualizes subjectivity through cyborg ontology and multispecies kinship, advocating for relational ethics and situated knowledge in resisting commodified, technocentric logics
Braidotti (2013, 2022), who develops an affirmative, zoe-centered posthuman ethics and positions feminist, anti-anthropocentric frameworks as central to resisting neoliberal and technocratic domination
Hathaway (2020), who critiques neoliberalism as a regime of corporate power, helping expose the structural entanglements of technological development, market logic, and platform infrastructures
Lockhart (2025a), who extends these interventions by centering feminist solidarity, Indigenous relationality, and systemic psychotherapeutic perspectives as practical frameworks for embodied, collective care within posthumanist infrastructures
Together, these thinkers offer a coherent foundation for designing and governing technologies not as tools of enhancement or enclosure, but as entangled processes of co-becoming. This means designing for mutual responsiveness rather than control, fostering participatory governance rather than corporate capture, and cultivating multispecies flourishing rather than narrow human optimization. By embedding these principles, we can open pathways toward techno-social futures that resist commodification and re-center care, responsibility, and shared becoming at their core.
6. Future Research Directions, Interdisciplinary Dialogues, and Limitations
While theoretical elaboration and localized practices provide essential groundwork, sustained transformation demands an expansion of scholarly, methodological, and political horizons. The critique of transhumanist subjectivity and the elaboration of relational futures must continue through rigorous interdisciplinary work that bridges philosophy, political economy, sociology, ecological science, and critical design.
At the level of theory, further integration is needed across traditions in posthumanist ethics, affect theory, political economy, and systems ecology. These convergences can help illuminate how techno-subjectivity is constituted not only through interface or ideology, but through complex assemblages involving labor, code, infrastructure, cognition, and affect. Methodologically, this calls for research approaches capable of addressing multi-scalar and multi-agent phenomena.
Bratton’s (2016) The Stack exemplifies this by rendering visible the vertical layering of computation, governance, and subjectivity. Combining ethnographic inquiry with computational analysis, participatory action research, and speculative design offers a way to surface the lived experience of branded selfhood and algorithmic governance, while also prototyping alternative futures.
Future research must also engage with the emerging edges of technological development. Synthetic biology, neural interface design, and AI-mediated cognition are not neutral tools; they are ideological apparatuses with world-shaping consequences. Rather than responding to these developments after the fact, researchers must intervene at the level of their initial framing, critically examining the assumptions, values, and political formations that guide their design. This includes asking not only “Who benefits?” but also “What forms of life are being made possible or foreclosed through these systems?”
Equally urgent is the task of mapping emergent forms of resistance and refusal. Digital disconnection movements, platform cooperativism, Indigenous data sovereignty, queer and disabled reimaginings of embodiment, and multispecies justice activism offer epistemic and political resources for thinking and acting otherwise. These are not merely critiques; they are world-building practices that prefigure the futures we hope to create.
Finally, collaborations between academic and activist communities must be strengthened. The technopolitical crises of our time, including surveillance capitalism and ecological collapse, demand not only theoretical sophistication but also public engagement, institutional transformation, and strategic coalition-building. The stakes involved are infrastructural, ontological, and existential.
Summary
Alternative imaginaries grounded in posthumanist thought, cybernetic reflexivity, and socio-technical relationality offer new ways to understand technological becoming not as enhancement but as entanglement. Transhumanism does not break from humanism; rather, it intensifies its most exclusionary premises. Its vision of subjectivity as rational, sovereign, and market-aligned is not inevitable. Posthumanist theorists such as Haraway, Braidotti, and Parisi provide conceptual tools to disrupt this imaginary by foregrounding hybridity, multispecies kinship, and distributed cognition. Bateson, Bratton, and I offer ecological and infrastructural logics that reorient feedback, agency, and governance toward collective adaptability and systemic care.
These frameworks open the possibility of designing technologies not for control or optimization but for relational flourishing. They establish a foundation upon which ethical, political, and technical interventions can be co-developed through participatory governance, critical design, and technosocial experimentation. The political implications of this theoretical reorientation are profound. What forms of resistance, regulation, and imagination are necessary to break the grip of techno-libertarianism and neoliberal futurism? What does it mean to reclaim the future not as a branded upgrade but as a collective responsibility?
7. Conclusions: Beyond the Branded Human—Toward Relational–Ecological Futures
Transhumanism, far from heralding a liberatory posthuman future, operates as an intensified extension of liberal humanism enmeshed within neoliberal capitalism. The enhanced human is not an emancipated subject but a corporately branded, datafied entity optimized for market participation and governed by infrastructures of surveillance and control. This cybernetic subject is bound to platform capitalism’s logic of commodification, responsibilization, and algorithmic governance, where autonomy is redefined as continuous self-optimization within corporate ecosystems. Promises of transcendence and liberation are inseparable from neoliberal market imperatives and the privatization of embodied existence. In contrast, posthumanist critiques emphasize relationality, hybridity, and ecological embeddedness, revealing alternative modes of subjectivity that resist this capture.
Understanding the complexity of this socio-technical ecosystem is essential to envisioning collective agency amid corporate extraction, inequality, and governance. Open-source initiatives, demands for transparency and equity, and reimaginings of subjectivity grounded in relationality rather than commodification reveal cracks in the dominant system. These openings suggest that the humanOS can be disrupted and reconfigured, nurturing futures that resist extraction and eugenic logics and reinstate democratic accountability. They invite us to rethink technology not as an instrument of control, but as a medium for collective flourishing, multispecies kinship, and ecological accountability.
Within these sites of transgression and refusal, collective action and alternative techno-ethical imaginaries confront corporate capture and neoliberal biopolitics. Acts of disobedience, technopolitical hacking, and multispecies engagement destabilize proprietary subjectivities and foster relational modes of becoming-with. Moving beyond enhancement-as-commodity, these practices cultivate shared vulnerability and reconfigure techno-social relations toward open, multispecies futures grounded in mutual care, systemic reflexivity, and democratic governance.