1. Introduction
Human cognition has never been an isolated process, but it is only in the digital age that its distributed nature has become fully manifest. The proliferation of algorithmic systems, from search engines and recommender systems to autonomous agents and large language models, has radically transformed the conditions under which thinking, perceiving, and deciding occur. Today, cognition is not only extended through digital systems, it is entangled with them. Our perceptual fields, memories, and choices are partly shaped by algorithmic processes that operate at scales and speeds far beyond conscious awareness. In this sense, human thought has entered a new phase, that of the cognitive assemblage.
In this paper, we propose a comprehensive perspective on this transformation. Rather than isolating technical, cognitive, or political aspects, we examine the full constellation of processes through which cognition has become distributed across humans and machines. We explore its operative mechanisms, the gradual theoretical recognition of these mechanisms, its implications for the normative architecture of autonomy and rationality, the deliberate ways in which such hybrid cognitive systems are now being designed and institutionalized, and their possible future trajectories. Treating these dimensions together allows us to understand cognitive assemblages as an emergent form of collective sense-making that reorganizes work, knowledge and power.
The classical image of cognition, inherited from Enlightenment rationalism, rests on the idea of an autonomous subject capable of reasoning, deciding, and acting on its own. This conception presupposes a clear boundary between the interiority of the mind and the exteriority of its environment, a view canonically expressed in Descartes’ cogito ergo sum. Yet this image has long been questioned. As the French philosopher Paul Ricoeur emphasized, the cogito is always mediated by language, symbols, institutions, and other practices that precede and exceed the individual subject. The thinking self, far from being an origin, is already embedded in historical, social, and technical structures that shape its capacities and horizons of action.
Over the past decades, research in cognitive science and philosophy of mind has further undermined this separation. The theory of the extended mind [
1] argued that cognition is not confined within the brain, but distributed across tools, artifacts, and social practices. When a person uses a notebook, a smartphone, or a GPS device, these instruments become functional parts of the cognitive process itself. Cognition, therefore, is not only biological; it is enacted within a wider system of material, symbolic, and institutional supports.
Edwin Hutchins’ ethnographic work on navigation teams in
Cognition in the Wild further showed that reasoning and problem-solving emerge from the coordinated activity of humans and artifacts within an environment [
2]. For instance, the task of determining a ship’s position emerges from coordinated interactions among sailors, charts, instruments, and calculation procedures, rather than from the cognition of any individual navigator. In this view, intelligence is not an attribute of the individual, but a property of the network that links minds, symbols, and material supports. With the advent of algorithmic systems, this distributed structure takes on a new intensity and autonomy: machines no longer merely assist human thought, they participate in it.
To grasp this new configuration, the concept of the
assemblage (
agencement) developed by Deleuze and Guattari [
3] provides a productive lens. An assemblage is not a fixed structure, but a dynamic configuration of heterogeneous elements, human and non-human, organic and technical, that temporarily align to produce certain effects. Their reading of Kafka presented assemblages as machinic constellations in which subjects, institutions, and material dispositifs jointly produce action, dissolving stable boundaries between human agency and systemic forces. Cognition, in this sense, can be understood as an emergent property of assemblages that connect brains, bodies, codes, infrastructures, and environments. Latour [
4] showed that the distinction between the social and the technical, or between humans and non-humans, no longer holds when the capacity to act and to know is distributed across hybrid networks.
The notion of assemblage proposed in this paper should be understood as an analytical lens rather than as a closed formal object. As with complex systems more generally, an assemblage offers a partial and situated apprehension of a system, highlighting certain relations, constraints, and dynamics while necessarily leaving others aside. An assemblage is defined here by the contingent coupling of heterogeneous components, technical, institutional, informational, or cognitive, whose interactions produce emergent properties that cannot be reduced to any single element. This perspective is inherently recursive: any assemblage may be decomposed into sub-assemblages, and conversely embedded within larger assemblages, depending on the scale and question under consideration. Assemblages are therefore neither fixed nor exhaustive, but dynamic configurations whose boundaries are analytically constructed and historically contingent.
Cognitive assemblages involve human agents interacting with informational and technological structures, such as decision-making systems combining human judgment and algorithmic recommendation. Infrastructures like supply chains or energy networks, where coordination and adaptation emerge without centralized control, can be considered as non-cognitive assemblages. Within cognitive assemblages, algorithms operate as actors that mediate, translate, and transform human experience. They filter information, prioritize signals, and generate representations that in turn influence our perceptions and decisions. As such, they reshape the very topology of cognition: what we attend to, remember, and value increasingly depends on computational intermediaries. This condition is not one of simple extension or prosthesis, but of interdependence and co-evolution.
To live with algorithms is to inhabit a new cognitive ecology. The algorithmic systems that permeate contemporary life, such as search engines, recommendation platforms, navigation systems, or more generally foundation models, are no longer external tools but internalized components of thought. They filter perception, guide memory, and anticipate intention. Our minds extend into infrastructures of computation, forming what might be called a distributed cognitive assemblage. The question is no longer whether we depend on these systems, but how to inhabit them responsibly: how to transform dependence into symbiosis, delegation into co-evolution.
Algorithmic systems differ from earlier cognitive supports in one crucial respect: they are not passive tools but active agents of interpretation. Machine-learning models and neural networks extract patterns from massive datasets, generating inferences that feed back into the social and cognitive loops that produced the data in the first place. Recommendation engines decide what we see; predictive algorithms decide who receives a loan or medical treatment; large language models generate plausible continuations of thought itself. These processes operate with increasing opacity, producing a cognitive environment where human intentions and algorithmic functions are inseparably intertwined.
Katherine Hayles’ notion of nonconscious cognition [5] helps articulate this transformation. For Hayles, cognition extends beyond human consciousness, through nonconscious cognitive processes, and beyond humans to include systems capable of sensing, processing, and responding to information. What emerges is not an anthropocentric intelligence but a layered ecology of cognition, biological, technical, and social, where different forms of awareness and responsiveness interact. The boundaries of thought become porous: the human is now only one node in a broader network of cognitive agencies.
This new configuration calls into question foundational philosophical assumptions about the nature of autonomy, knowledge, and responsibility. If decisions are increasingly the product of assemblages that include algorithms, infrastructures, and databases, then who, or what, is the subject of cognition and action? Traditional epistemology and ethics rely on the notion of a stable, self-contained subject. But the distributed nature of algorithmic cognition suggests that agency is relational and collective rather than individual.
This poses a profound challenge to modern notions of freedom and dignity, and to liberal political frameworks based on the sovereignty of the subject. At the same time, it opens possibilities for reimagining agency in more relational and ecological terms. In this respect, East Asian philosophies provide valuable conceptual resources. The Daoist principle of wu wei (无为), action through non-domination, and the Buddhist idea of dependent co-arising, yuanqi (缘起), both articulate a form of agency grounded in interdependence rather than separation. In classical Chinese statecraft, for instance, Daoist-inspired governance tends to emphasize minimal intervention. Rulers shape conditions and incentives rather than issuing detailed commands, allowing social order to emerge without coercive control. These notions resonate with the logic of complex adaptive systems, where stability arises from feedback, cooperation, and mutual adjustment rather than unilateral control.
We should not lament the loss of autonomy but instead take the present opportunity to reframe it in a transcultural dialogue. Autonomy may no longer mean self-sufficiency, but the capacity to maintain coherence and direction within networks of interdependence. This is what Yuk Hui describes as the need for a new cosmotechnics, a reintegration of technological rationality with cultural and ethical orientations specific to each civilization [
6]. Such a project seeks to reconcile technical evolution with plural conceptions of life and what a good life might mean.
The notion of the cognitive assemblage thus invites a shift in perspective: from opposition between humans and machines to the conception of their co-adaptation and co-evolution. Rather than conceiving AI as an autonomous entity, we may view it as part of a symbiotic cognitive ecology where learning and sense-making occur across multiple levels and agents. This view dissolves the dichotomy between control and submission, replacing it with patterns of mutual influence and co-evolution.
Figure 1 illustrates the notion of assemblage. It should be read as a concrete example rather than a generic framework. The feedback arrows do not represent universal relations, but plausible mechanisms observed with contemporary digital systems. For instance, the negative feedback from Sensors & Data Capture to Task-Specific ML Models represents calibration, validation, or constraint effects, where real-world data exposes model errors, drift, or bias, thereby limiting model autonomy and triggering retraining, correction, or deactivation.
The remainder of the paper is organized as follows.
Section 2 examines what we call the
algorithmic condition, understood as the progressive embedding of algorithmic systems into the processes through which individuals, institutions, and societies perceive, decide, and act. We analyze how this condition reshapes collective intelligence, not merely by augmenting cognitive capacities, but by redistributing agency, intention, and responsibility across heterogeneous human and machine actors. Particular attention is paid to the emergence of feedback-driven forms of coordination that challenge classical models of individual and collective rationality.
In
Section 3, we revisit several foundational categories of modern political and moral philosophy, such as autonomy, responsibility, delegation, and legitimacy, in light of these transformations of cognition. Rather than approaching these issues from a purely normative standpoint, we emphasize how evolving cognitive architectures, increasingly mediated by algorithmic systems, alter the practical meaning of these concepts. This section argues that ethical and political questions cannot be separated from the technical and organizational conditions under which cognition is distributed and operationalized.
Section 4 is devoted to the notion of
cognitive symbiosis and constitutes the empirical and constructive core of the paper. We trace how classical theoretical models conceptualize cognition as an emergent property of socio-technical assemblages, from the initial cybernetic model through adaptive systems to the more recent concept of cyber–physical–social systems (CPSS).
In
Section 5, we ground these models in a series of concrete examples drawn from knowledge discovery, planetary-scale environmental systems, urban infrastructures, and platform-mediated social coordination. Through these cases, we show how hybrid human–machine systems already function as cognitive actors in their own right, offering both new capacities for adaptation to complexity and new sources of systemic risk.
The paper concludes by discussing the broader implications of cognitive assemblages for future forms of governance and collective action, highlighting both their potential contribution to addressing global challenges and the critical need to anticipate undesirable political and social trajectories.
2. Distributed Cognition and the Algorithmic Condition
Human cognition has never been an isolated affair. Even before the rise of digital technologies, thinking was embedded in material, social, and symbolic systems that extended far beyond the brain. Maps, writing, architectural plans, and scientific instruments externalized cognitive processes, creating what Clark and Chalmers called the extended mind [
1]. Edwin Hutchins proposed that cognition should be understood as a property of systems rather than of individuals [
2]. In his ethnography of navigation aboard a U.S. Navy ship, he demonstrated how reasoning emerged from the coordination of people, artifacts, and representational media. The cognitive process was not located in any single mind but in the circulation of information within the assemblage. This view, later developed in Clark’s Supersizing the Mind [7], broke further with the Cartesian dualism of mind and body: cognition is not internal computation but distributed organization. But what was once merely a philosophical hypothesis has now become an empirical observation: cognition is increasingly distributed across networks of human and algorithmic agents.
2.1. Algorithms as Cognitive Agents
In the age of algorithms, this distributed model becomes not merely descriptive but normative. Cognitive operations are now performed through architectures that actively transform input data into output decisions. The sensors of a smartphone, the predictive models of a search engine, or the optimization routines of a logistics network all perform cognitive functions once reserved for humans: perception, inference, and planning. What changes is the scale and invisibility of the process. As N. Katherine Hayles argues, information has detached from its material substrate, generating a posthuman condition in which cognitive functions circulate between human and non-human nodes [
8].
Algorithms with learning capacities no longer simply execute human instructions. They sense, learn, and adapt through recursive loops of feedback and correction. In this sense, they participate in cognition as autonomous yet interdependent agents. Their capacity to process data at speed and scale transforms cognition into a relational process of co-evolution between humans and machines. Yuk Hui has suggested that this recursive relation defines the contemporary technological condition: the machine does not merely extend human capacities but becomes part of the same recursive order of sense-making [
9].
This recursive paradigm introduces new epistemological asymmetries. Whereas the human mind remains bounded and situated, algorithmic systems operate within vast and mostly opaque data spaces. Moreover, algorithmic systems become increasingly powerful, while the human brain remains essentially what it has been for millennia. The resulting cognitive assemblages redistribute what counts as knowledge, who or what can know, and how decisions are justified. The “algorithmic condition” thus refers to a transformation not only of technological mediation but of the very structure of cognition itself.
2.2. Historical Foundations
Galison traced this transformation back to the cybernetic revolution of the mid-20th century [
10]. Norbert Wiener and his contemporaries conceived of humans and machines as components of a single feedback system, governed by information flows rather than intentions. This cybernetic ontology of control and communication paved the way for today’s predictive infrastructures. Yet where Wiener imagined control as homeostatic regulation, contemporary algorithmic systems may pursue their own optimization objectives without maintaining any global equilibrium. Their goal, or that of their developers, might not be stability but continuous growth and adaptation, a shift with profound consequences for autonomy and responsibility.
If cognition is now distributed, its primary locus is no longer the human body or even the interface, but the invisible infrastructures that collect, store, and analyze data. These infrastructures continuously mine the datasphere, the unbounded sphere of all pieces of information, whether under human control or not, whether known or unknown. The datasphere is to information what the hydrosphere is to water molecules [11]. It contains, in particular, the DNA sequences of all living beings, the traces of all our journeys, as well as all the innumerable books of the Library of Babel imagined by Jorge Luis Borges. Generative chatbots might then be seen as librarians consulting the index of Borges’ library in search of hopefully relevant books.
The infosphere is extracted from the datasphere through processes of selection, extraction, modeling, and interpretation, increasingly mediated by digital infrastructures and algorithmic agents. It is related to questions of agency, value, and stewardship; it is the ontological space in which informational life unfolds [
12]. Yet it is only very partially shared: most data fall under protection regimes. Within the infosphere, algorithms mediate perception and decision by filtering the overwhelming abundance of signals.
However, the very mechanisms that enhance cognition also delimit it. Recommendation engines, predictive policing systems, or algorithmic management platforms all narrow the field of possible actions by pre-selecting what is visible, valuable, or relevant. The epistemic horizon of individuals becomes shaped by collective computational processes. Cognitive agency thus shifts from individual deliberation to systemic configuration. This raises a fundamental question: when the environment of cognition is itself algorithmically curated, what remains of human autonomy?
The invisibility of these infrastructures produces a paradox of transparency. The more our world is mediated by computation, the less we perceive the operations that sustain it. The cognitive map is flattened into a seamless interface. Yet beneath the apparent fluidity lies a dense ecology of data extraction, model training, and automated inference. This world, misleadingly called virtual, in fact relies on very tangible infrastructures. A growing part of the energy produced by humans on Earth is devoted to it [
13]. Understanding the algorithmic condition therefore requires uncovering the deep structures of distributed cognition, the cognitive geology beneath the digital surface.
2.3. Bounded Rationality
Before digging further into the intricacies of the emerging assemblage, it is necessary to revisit how the cognitive capacities of a single individual have come to be understood. Herbert Simon’s theory of bounded rationality provides a foundational shift from idealized models of human decision-making toward a more empirically grounded understanding of cognition. Rather than assuming that agents optimize across all possible options, Simon argued that human reasoning is constrained by limited information, finite computational resources, and the structure of the environment itself [14,15]. Decision-making, in this view, is a process of satisficing, seeking solutions that are good enough given contextual constraints, rather than strict optimization. This perspective reframes cognition as an adaptive interaction with the world, where intelligent behavior arises not from perfect rationality but from the coordination between internal heuristics and external affordances.
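Simon’s satisficing can be given a compact operational illustration. The following sketch is purely illustrative, not part of Simon’s formal apparatus: the route options, their travel times, and the aspiration level are all hypothetical.

```python
from typing import Callable, Iterable, Optional

def satisfice(options: Iterable[str],
              value: Callable[[str], float],
              aspiration: float) -> Optional[str]:
    """Return the first option whose value meets the aspiration level.

    Unlike optimization, the search stops as soon as a 'good enough'
    candidate is found, so most options are never even evaluated.
    """
    for option in options:
        if value(option) >= aspiration:
            return option
    return None  # no option met the aspiration level

# Hypothetical example: choosing a route by estimated travel time (minutes).
routes = {"A": 55, "B": 42, "C": 38, "D": 40}
# Value is negated travel time; aspiration: at most 45 minutes.
choice = satisfice(routes, lambda r: -routes[r], aspiration=-45)
print(choice)  # "B": the first acceptable route, not the optimum "C"
```

The contrast with optimization is visible in the result: an optimizer would scan all four routes and pick "C", while the satisficer commits to "B" as soon as it clears the threshold.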
Crucially, bounded rationality also highlights the role of external cognitive scaffolding. Humans offload memory, computation, and evaluation onto tools, institutions, and social arrangements that extend their effective intelligence. Simon anticipated what later theories of distributed cognition would formalize: that rationality is shaped as much by the structure of the cognitive environment as by internal mental processes. In contemporary computational ecosystems, where algorithms filter information, automate evaluation, and pre-structure decision spaces, the boundaries of rationality increasingly coincide with the design of digital infrastructures. The concept of bounded rationality is a precursor to contemporary discussions of hybrid cognition and algorithmically mediated intelligence.
2.4. Systems 0, 1, and 2 and the Architecture of Hybrid Cognition
The now classical model of cognition, developed and popularized by Daniel Kahneman, distinguishes between System 1 (fast, intuitive, automatic) and System 2 (slow, deliberate, analytical). Recognizing a familiar face in a crowd belongs to System 1: it requires no deliberate analysis. Weighing the pros and cons of a major decision with long-term consequences belongs to System 2. Recent research in cognitive science and human–AI interaction has proposed extending Kahneman’s model to include external cognition. Chiriatti et al. proposed considering a deeper layer, System 0, that precedes and conditions both [16]. System 0 can be seen as a distributed substrate of interaction between humans and AI systems, a form of collective intelligence that emerges through continuous coupling.
System 0 increasingly structures the conditions under which Systems 1 and 2 operate. In many contemporary socio-technical assemblages, automated infrastructures pre-filter information, constrain options, and trigger actions before intuitive or deliberative cognition is engaged. System 0 is not an individual cognitive process but a relational field. It includes the feedback loops between human behavior, algorithmic prediction, and environmental context. For instance, navigation systems, which continuously recalculate optimal paths based on traffic data as well as the route actually chosen, without the driver explicitly reasoning about alternatives, participate in System 0. In this sense, it resonates with the notion of cognitive assemblage developed throughout this paper: cognition as emergent from the interplay of heterogeneous agents. System 0 thinking reframes intelligence as systemic rather than hierarchical, a dynamic equilibrium of perception, computation, and adaptation.
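The pre-structuring role of System 0 can be made concrete with a toy sketch: an algorithmic layer ranks and truncates the option space before deliberate (System 2) choice even begins. All item names, predicted scores, and preferences below are invented for illustration.

```python
# Toy model of a System-0 layer: an algorithmic filter ranks and truncates
# the option space before human deliberation (System 2) ever sees it.

# Hypothetical items with an engagement score predicted by some model.
candidates = {
    "item1": 0.91, "item2": 0.15, "item3": 0.78, "item4": 0.60, "item5": 0.05,
}

def system0_prefilter(scored: dict[str, float], k: int = 3) -> list[str]:
    """Keep only the top-k items by predicted score: the rest never reach
    conscious deliberation, however valuable they might have been."""
    return sorted(scored, key=scored.get, reverse=True)[:k]

def system2_choice(visible: list[str], preference: dict[str, float]) -> str:
    """Deliberate choice, but only among the pre-filtered options."""
    return max(visible, key=lambda item: preference.get(item, 0.0))

visible = system0_prefilter(candidates)  # ["item1", "item3", "item4"]

# Hypothetical private preferences; note that the most preferred item,
# "item2", was filtered out before deliberation could consider it.
preferences = {"item2": 1.0, "item4": 0.8, "item1": 0.2}
print(system2_choice(visible, preferences))  # "item4": best *visible* option
```

The point of the sketch is structural rather than empirical: deliberation operates downstream of an option set it did not choose, which is precisely the sense in which System 0 conditions Systems 1 and 2.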
2.5. Between Augmentation and Delegation, an Ethical Challenge
This new model challenges traditional epistemic hierarchies. The locus of understanding no longer resides in either the human or the machine but in the circulation of meaning across the assemblage. Theories of mind such as Jara-Ettinger’s inverse reinforcement learning perspective [
17] point toward a similar conclusion: to understand others is to infer their goals through the reconstruction of value functions, a process that can be computationally modeled. When such inference is distributed across humans and AI, cognition becomes a shared negotiation of intentions and predictions, a social process extended into code.
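The inverse-inference idea can be illustrated with a tiny Bayesian sketch: given observed actions, which candidate goal best explains them under a softmax (near-rational) choice model? The goals, actions, and reward values below are all hypothetical, and this is a deliberately minimal stand-in for full inverse reinforcement learning.

```python
import math

# Hypothetical rewards each candidate goal assigns to each action.
REWARDS = {
    "coffee": {"kitchen": 2.0, "desk": 0.0, "printer": 0.0},
    "report": {"kitchen": 0.0, "desk": 2.0, "printer": 1.5},
}

def action_likelihood(goal: str, action: str, beta: float = 1.0) -> float:
    """Softmax (Boltzmann) probability of an action under a goal:
    a near-rational agent picks higher-reward actions more often."""
    rewards = REWARDS[goal]
    z = sum(math.exp(beta * r) for r in rewards.values())
    return math.exp(beta * rewards[action]) / z

def infer_goal(observed: list[str]) -> str:
    """Return the goal with the highest posterior (uniform prior)
    given a sequence of observed actions."""
    posteriors = {}
    for goal in REWARDS:
        posteriors[goal] = sum(
            math.log(action_likelihood(goal, a)) for a in observed
        )
    return max(posteriors, key=posteriors.get)

print(infer_goal(["desk", "printer", "desk"]))  # "report"
```

Reading off repeated visits to the desk and printer, the model concludes that "report" explains the behavior better than "coffee". This is the computational kernel of the inverse perspective: intentions are reconstructed from observed choices via an assumed value function.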
From a systemic perspective, algorithmic cognition can be viewed as a form of collective intelligence. Billions of interactions, search queries, sensor readings, user behaviors, are aggregated into models that reveal emergent patterns. No individual intends or understands the totality, yet the system as a whole exhibits learning capacities that surpass those of its parts. This is the hallmark of complex adaptive systems: local interactions generate global order without centralized control.
The analogy with biological and ecological systems is instructive. Just as an ant colony or an immune system achieves coordination through distributed feedback, digital networks self-organize through flows of data and computation. But unlike natural systems, algorithmic networks evolve under the influence of economic incentives, power asymmetries, and design choices. Their emergent properties are not neutral: they embody the goals and biases inscribed in their architectures. Thus, the promise of collective intelligence coexists with the risk of systemic capture.
The tension between augmentation and delegation defines the ethical and political stakes of distributed cognition. Technologies were initially conceived as tools to augment human capacities. But as algorithms gain autonomy, humans increasingly delegate not only tasks but judgments. Recommendation systems decide what we read; predictive algorithms influence judicial outcomes and, as in Minority Report, could even be imagined preventing crimes by arresting potential offenders before they act; generative models produce creative artifacts. The line between assistance and substitution becomes blurred.
Katherine Hayles emphasizes that cognition should be understood as a spectrum encompassing both conscious and nonconscious processes [
5]. Algorithmic systems amplify this nonconscious dimension: they operate below the threshold of human awareness, yet shape the field of perception and action. The resulting assemblages are neither fully controllable nor entirely external. They constitute a new kind of symbiotic cognition in which human and machine intelligence coevolve.
This symbiosis is not inherently harmonious. It demands new forms of reflexivity and governance capable of aligning distributed cognition with collective values. The challenge is to design systems that extend human understanding without eroding responsibility, that amplify intelligence without enclosing it. The shift from the extended mind to the algorithmic condition marks not the end of human cognition, but its transformation into an ecology of minds, human, artificial, and hybrid, interacting across scales and media. This distributed model of cognition invites a redefinition of autonomy and agency.
3. Autonomy, Dependence, and the Ethics of Delegation
The emergence of algorithmic systems as active participants in cognition forces a reconsideration of one of the central categories of modern political philosophy: autonomy. Since the Enlightenment, autonomy has been taken as the defining feature of human rationality and moral worth. In the Kantian tradition, to be autonomous is to legislate the moral law for oneself, to act according to reason rather than inclination or external determination [
18]. Autonomy thus defined presupposes an individual subject, capable of reflection and decision, whose moral responsibility derives from self-determination. Yet the digital environment in which cognition now unfolds complicates this ideal profoundly. We no longer think, decide, or even perceive entirely on our own. Algorithms recommend, filter, prioritize, and sometimes decide in our stead. What remains of autonomy when the space of decision is co-constructed with machines? Can we distinguish the parts that belong to System 0, 1, or 2? It should be remembered that in Kant’s time, polymaths could embrace a large part of the knowledge of their era. While the spectrum of knowledge has since expanded exponentially, the share an individual can embrace has shrunk to an infinitesimal portion of the knowledge of the time.
3.1. Erosion and Extension of Agency
The contemporary condition is not simply one of loss, but of redistribution. Agency has become distributed across human and non-human actors, as Bruno Latour and others have noted in the sociology of science and technology. In algorithmic environments, this distribution is intensified and automated. Every time a recommender system selects the information we will see, or a navigation system dictates the route we will follow, a portion of cognitive and moral agency is effectively delegated. Such delegation can increase efficiency, reduce cognitive load, and even extend our capacities. Michel Serres used the parable of Saint Denis, who after being decapitated took his head in his hands and faced the Romans, as an image of thinking capacities located outside our bodies. Andy Clark has described the human mind as “supersized,” extending into the technological artifacts and social systems that support it. In that sense, delegation can be understood as a form of cognitive scaffolding, a structural coupling between mind and world that allows higher-order reasoning and coordination.
Yet delegation also entails dependence. When algorithms become opaque, when the criteria of their decisions are inaccessible or non-explainable, the user’s capacity for critical reflection is undermined. What appears as assistance easily becomes guidance, and guidance can slip into control. Michel Foucault’s notion of technologies of the self [
19] remains illuminating here: technologies are never neutral instruments; they are frameworks through which subjects are constituted and disciplined. The algorithmic environment produces new forms of subjectivation, users, profiles, targets, whose autonomy is redefined through patterns of interaction and surveillance. It can become difficult to find the right political balance between freedom and control [
20]. Dependence on algorithmic mediation thus reveals a paradox of delegation: we gain new capacities precisely by relinquishing control.
The question of responsibility in cognitive assemblages cannot be addressed by simply returning to individualist models of accountability. The Enlightenment model of agency presupposed a bounded subject, acting from intention and awareness. In distributed systems, however, causality and intention are spread across heterogeneous components, humans, codes, databases, and infrastructures. Who, then, is responsible for algorithmic decisions that no single actor can fully comprehend? Donna Haraway’s
Cyborg Manifesto [
21] already challenged the boundary between the organic and the technical, calling for a politics of hybridization. Her insight anticipates the ethical dilemma of the present: responsibility must itself become distributed, a relational practice rather than a possession.
This entails moving from an ethics of control to an ethics of care. Instead of seeking to dominate complex systems, we might aim to maintain their balance and transparency, their robustness and efficiency. A concrete illustration can be found in the management of large-scale content moderation systems. An ethics of control seeks to specify exhaustive rules and automated enforcement mechanisms to eliminate undesirable behavior. By contrast, an ethics of care emphasizes continuous monitoring, human oversight, contextual judgment, appeal mechanisms, and institutional learning, accepting that errors, ambiguities, and value conflicts are intrinsic to the system.
This shift parallels the transition from the notion of autonomy as self-mastery to one of relational interdependence. The design of algorithmic systems should therefore include not only technical considerations (such as explainability or fairness), but also institutional and cultural mechanisms that foster shared responsibility. Ethical design, in this sense, becomes a form of co-responsibility, an assemblage of humans and machines accountable for their mutual shaping. This is a domain where cultural contexts might contribute to shaping different arrangements. East Asian traditions of thought offer inspiring resources for apprehending these issues.
3.2. Relational Autonomy in Eastern Thought
The contrast between Western and East Asian traditions of thought provides valuable resources for rethinking autonomy under conditions of algorithmic interdependence. While Western philosophy, from Kant to Foucault, has emphasized the individual’s relation to moral law or disciplinary systems, many Eastern traditions conceive the self not as an independent substance but as a node within a web of relations. In Buddhist thought, the doctrine of dependent arising, or yuanqi (缘起), expresses the principle of co-arising: all phenomena exist only through their mutual interdependence. There is no autonomous subject standing outside the network of causes and conditions; there is only the flow of relations in which identity itself is emergent.
Such a view resonates with the relational nature of cognition in the algorithmic age. Rather than opposing human and machine, it invites us to consider their co-arising. Shunryu Suzuki, who popularized Zen Buddhism in the West, proposes a description of the Zen mind as “beginner’s mind”, open, ungrasping, non-possessive, suggesting a posture of awareness that is inherently adaptive and non-controlling [
22]. In this sense, to live with algorithms would mean not to dominate them, but to cultivate relational awareness within the assemblage they form with us.
Similarly, Confucian philosophy offers a conception of relational autonomy grounded in moral cultivation and reciprocity. Tu Weiming’s reinterpretation of Confucian humanism as a continuum of self, community, and cosmos [
23] highlights that moral agency arises not from isolation but from situated participation in social and cosmic order. On the other hand, the Daoist ideal of
wu wei (无为), non-coercive action, achieves efficacy by resonance rather than imposition [
24]. Such a relational ontology offers a valuable ethical counterpoint to the Western ideal of mastery: it sees dependence not as a loss of freedom, but as the condition of harmonious co-existence.
Empirical research on cultural cognition supports these philosophical distinctions. Richard Nisbett’s comparative studies have shown that Western and East Asian cognitive styles differ in their focus: Westerners tend to emphasize object-based, analytical reasoning, whereas East Asians tend to privilege context-based, holistic reasoning [
25]. This cognitive orientation aligns with broader metaphysical assumptions about the nature of the self and the world.
A concrete illustration can be found in the deployment of algorithmic decision-support systems in organizations. In many Western contexts, such systems are often framed as tools that should either preserve individual privacy or be clearly subordinated to human authority, leading to strong demands for explainability and individual accountability. In contrast, in East Asian organizational settings, similar systems are more readily integrated as collective coordination instruments, embedded in hierarchical or relational decision processes, where responsibility is distributed across teams rather than attributed to a single decision-maker. In the context of algorithmic mediation, such findings suggest that societies may respond differently to technological dependence. Where individual autonomy is central, delegation to machines may appear as a threat; where relationality and co-dependence are culturally embedded, it may be experienced as an extension of collective intelligence.
These insights open a space for transcultural dialogue on the ethics of cognitive assemblages. This does not mean that one model should replace the other. Western traditions can contribute an emphasis on accountability and critique; Eastern traditions can contribute an ethos of balance, resonance, and interdependence. Moreover, it is promising to consider how the East apprehends the concepts of the West, and conversely. Together, these transcultural perspectives might inform a global ethics adequate to a world where cognition is no longer confined to the human but circulates through hybrid systems of code, matter, and mind that, increasingly carried by digital platforms, have no choice but to cross borders.
3.3. Toward a Relational Ethics of Symbiosis
In the industrial age, machines amplified the body. In the algorithmic age, they amplify the mind. Yet the metaphor of amplification is insufficient. Algorithms do not merely extend human capacities; they reshape the conditions under which cognition occurs. In a provocative article in Wired, Chris Anderson predicted the end of theory, the data deluge making the scientific method obsolete [
26]. The automation of knowledge and decision-making risks triggering a loss of intellectual know-how. Cognitive automatisms, once delegated to technical systems, tend to return as constraints. Shoshana Zuboff, in
The Age of Surveillance Capitalism [
27], has shown how this cognitive delegation becomes the foundation of new regimes of power. Data extraction turns experience into prediction, prediction into control. The algorithm, presented as neutral, becomes a vehicle of behavioral governance. To live with algorithms, therefore, requires vigilance and design: a continuous effort to reclaim agency within distributed intelligence.
Gilbert Simondon’s theory of individuation offers a useful counterpoint. In
On the Mode of Existence of Technical Objects [
28], Simondon describes technical beings as evolving entities that mediate between humans and the world. Technics, in his view, is not alien to life but an extension of the process of individuation itself. To live with algorithms, then, is not to oppose the technical to the human, but to recognize their co-evolutionary continuity. Technical objects are part of the human condition; the task is to ensure that this relation remains mutually beneficial and formative, rather than alienating [
29].
The challenge, then, is not to restore an obsolete ideal of autonomous mastery, but to invent new forms of moral and cognitive symbiosis. We must learn to inhabit distributed systems responsibly, acknowledging both our agency and our dependence. This requires cultivating transparency in algorithmic infrastructures, but also cultivating awareness of our own entanglement with them. Autonomy becomes relational, enacted through dialogue rather than separation. Responsibility becomes ecological, extending across human and non-human participants in the cognitive assemblage. The logic of optimization that has prevailed throughout modernity should be reconsidered to favor the success of the assemblage, while finding the right balance between the benefits of the parts and the benefits of the whole [
30]. The benefits of the parts have sometimes prevailed at the expense of the whole, as with individualism versus solidarity.
In this sense, the ethics of delegation does not introduce a radically new conception of the human as relational (such perspectives are well established in anthropology and social theory), but it forces a rearticulation of philosophical anthropology under unprecedented technical conditions. When cognition, judgment, and action are increasingly mediated by large-scale, opaque, and adaptive technical systems, human openness and relationality cease to be merely descriptive features and become normative and political problems: they must be specified, managed, and governed. To delegate is not to abdicate, but to participate in a broader field of cognition that includes machines, institutions, and environments. The task of philosophy is to articulate the conditions under which such participation can remain emancipatory rather than coercive. To live with algorithms ethically is to transform autonomy into harmony, and control into resonance.
3.4. Operational Reformulations in Cognitive Assemblages
The perspective of cognitive assemblages does not merely diagnose a redistribution of cognition and agency; it requires a corresponding reframing of several foundational political and ethical concepts. This reframing should not be understood as a loss or erosion of these concepts, but as a shift in their level of description. In assemblage-based systems, autonomy, governance, ethics, and normative values no longer attach exclusively to isolated individuals or institutions, but emerge from patterned relations among human, technical, and institutional components.
Autonomy, for instance, cannot be adequately assessed by focusing solely on individual decision-making capacity; it must be evaluated through the relational configurations that enable or constrain it. A similar shift applies to governance. If cognition is distributed across assemblages, governance can no longer be conceived as a centralized act of control exercised over passive systems. Instead, governance becomes the problem of modulating feedback loops within assemblages: deciding where intervention is possible, which couplings should be strengthened or weakened, and which dynamics require stabilization or dampening. Concrete avenues for implementation include the identification of leverage points, auditability of model-mediated decisions, and institutional mechanisms capable of operating at multiple temporal scales: fast technical responses alongside slower normative adaptation.
Ethics and values such as transparency, fairness, privacy, and accountability must likewise be reformulated. In assemblages, these values cannot be guaranteed by single mechanisms or actors. Transparency, for example, is no longer reducible to model interpretability alone, but concerns the legibility of entire sociotechnical processes. Fairness depends not only on algorithms, but on upstream data capture, downstream use, and institutional context. Checks and balances, traditionally conceived as separations of power between institutions, must be extended to include technical counterweights: redundancy, contestability, human-in-the-loop arrangements, and the possibility of meaningful disengagement.
Taken together, these reframings do not provide ready-made solutions, but they offer a coherent way to use cognitive assemblages as an analytical and design framework. They indicate where responsibility can be located, where intervention is feasible, and where classical concepts must be adapted rather than abandoned. In this sense, cognitive assemblages function less as a normative doctrine than as an epistemic tool: one that makes visible the new sites at which autonomy, governance, and ethics must be negotiated in increasingly complex sociotechnical systems. Such principles need to be implemented in practice.
Yet making these principles operational requires more than institutional reform or ethical guidelines. It requires understanding how cognition itself is transformed when humans and technical systems become tightly coupled in practice. Cognitive assemblages are not static arrangements of roles and responsibilities; they are dynamic systems in which learning, adaptation, and decision-making emerge from continuous interaction between human agents and computational processes.
To move from diagnosis to implementation, it is therefore necessary to examine the mechanisms through which such coupling reshapes cognition. How do humans adapt their reasoning when algorithmic systems anticipate, recommend, or act on their behalf? How do machines, in turn, incorporate human feedback, norms, and objectives into their operation? Addressing these questions leads to the notion of cognitive symbiosis: a condition in which human and artificial intelligences co-evolve, each extending the other’s capacities while simultaneously constraining them.
The following section develops this concept by clarifying how distributed cognition emerges within cognitive assemblages, and by identifying the feedback loops through which shared agency, learning, and dependence are progressively established.
4. Cognitive Symbiosis
The notion of cognitive symbiosis captures the increasing interdependence between human agents, algorithmic systems, and institutional infrastructures. Rather than conceiving cognition as the property of isolated individuals or machines, cognitive symbiosis emphasizes the emergence of hybrid cognitive agencies from the coordination of heterogeneous components: sensors, data platforms, predictive models, decision-making procedures, and situated human practices. Such systems were first formalized in the mid-twentieth century through the concepts of cybernetics and complex systems, which unified the understanding of technological, social, and natural systems.
These assemblages can now operate across scales, from global environmental prediction to mediated social and economic coordination. They constitute one of the main developments for coping with the current surge in complexity, the entanglement of human societies with their global environment, and the intricate supply chains supporting the economy. The topic is therefore not only one of theoretical interest for the humanities and social sciences, but also one of social and political design and engineering.
4.1. Cognitive Symbiosis as a Coupled Adaptive System
The “models” invoked in this section should not be understood as formal predictive models in the narrow mathematical sense, but as conceptual models describing how cognition can emerge from sustained coupling between human agents and technical systems. These models specify components, relations, and dynamics rather than optimal solutions. Their purpose is to make intelligible how cognitive functions, such as interpretation, anticipation, and decision-making, are distributed and stabilized across socio-technical assemblages.
At a minimal level, a cognitive assemblage can be modeled as a coupled adaptive system composed of (i) human cognitive agents with bounded attention, situated goals, and learning capacities; (ii) artificial components such as sensors, models, interfaces, and optimization procedures; and (iii) an environment that provides both material constraints and feedback signals. Cognition does not reside in any single component but emerges from recursive interactions: humans adjust behavior based on machine-mediated representations of the environment, while machines update their models based on human-generated data and responses. Over time, this mutual adaptation produces stable patterns of inference and action that are irreducible to either side taken in isolation.
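As a purely illustrative sketch (not a formal model from the literature; all quantities, parameters, and update rules below are invented), this coupled adaptive system can be caricatured in a few lines of Python, with a fast behavioral loop and a slower model-update loop:

```python
import random

# Toy coupled adaptive system (hypothetical throughout):
# - a fast loop in which the human adjusts behavior toward the machine's
#   representation of the environment at every step;
# - a slow loop in which the machine re-estimates the environment every K
#   steps from a noisy sensor reading and accumulated human traces.

random.seed(0)

K = 20                   # retraining period of the slow loop
env_state = 1.0          # the environment, drifting on its own
machine_model = 0.0      # the machine's current representation
human_behavior = 0.0     # human action, guided by the machine's representation
history = []

for step in range(200):
    # Fast loop: the human reacts to the machine-mediated representation,
    # not to the environment directly.
    human_behavior += 0.3 * (machine_model - human_behavior)
    history.append(human_behavior)

    # The environment evolves independently of both components.
    env_state += random.gauss(0, 0.05)

    # Slow loop: periodic retraining blends a noisy sensor reading with
    # recent human-generated data.
    if step % K == K - 1:
        sensed = env_state + random.gauss(0, 0.1)
        recent_human = sum(history[-K:]) / K
        machine_model = 0.5 * sensed + 0.5 * recent_human

# The trajectory of human_behavior is a joint product of both update
# rules; neither component tracks the environment on its own.
print(round(human_behavior, 3), round(machine_model, 3), round(env_state, 3))
```

The point of the sketch is structural: the stable pattern of behavior that emerges belongs to the coupling, not to the human or the machine taken in isolation.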
Cognitive symbiosis emerges when these feedback loops become structurally coupled and persistent. Machine learning systems, for example, adapt their internal representations to regularities in human behavior, while humans progressively internalize the affordances, defaults, and constraints imposed by algorithmic systems. Knowledge, in this sense, is not merely transferred between components but co-produced in the coupling itself. The assemblage learns as a whole, even if neither the human nor the machine fully understands the global behavior of the system.
Crucially, these models involve multiple temporal scales. Fast loops govern perception, recommendation, and action selection, while slower loops involve institutional learning, norm formation, and model retraining. Ontologies also evolve in this process [
31]. New categories will result from the assemblage and not solely from social evolution. Cognitive symbiosis arises when fast technical adaptation outpaces human reflective capacities, leading to partial delegation or bypass of deliberation, while slower governance mechanisms struggle to keep pace. The resulting cognition is emergent, situated, and path-dependent: shaped by historical contingencies, infrastructural choices, and asymmetries of control within the assemblage.
A simple illustration can be found in contemporary algorithmic classification systems. In domains such as credit scoring, online moderation, or predictive policing, individuals are no longer apprehended solely through stable social categories (citizen, consumer, offender), but through dynamically generated profiles such as “high-risk user,” “likely defaulter,” or “content amplification candidate.” These categories are not the product of explicit legal or cultural deliberation; they emerge from the coupling of data infrastructures, machine learning models, and institutional objectives. They evolve as models are retrained, thresholds adjusted, and feedback incorporated, often without human agents being able to fully articulate or contest the underlying criteria. Such categories acquire operational reality: they condition access to resources, visibility, or intervention, even though they have no clear counterpart in prior social ontologies. In this sense, the assemblage does not merely act upon pre-existing entities; it produces new ontological units that are computationally actionable but socially opaque.
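The production of such operational categories can be caricatured in a few lines of Python. The features, weights, threshold, and retraining step below are all hypothetical, chosen only to show how a label can flip without the individual changing:

```python
# Hypothetical sketch: an operational category such as "high-risk" emerges
# from a model score plus an institutional threshold, and shifts when the
# model is retrained. No weights or thresholds here come from any real system.

def risk_score(features, weights):
    """Linear score: the model's view of an individual."""
    return sum(f * w for f, w in zip(features, weights))

def categorize(features, weights, threshold):
    """The assemblage's ontological act: score -> operational label."""
    return "high-risk" if risk_score(features, weights) > threshold else "standard"

person = [0.4, 0.7, 0.1]        # the same individual throughout

weights_v1 = [1.0, 0.5, 2.0]    # initial model
weights_v2 = [0.2, 0.5, 2.0]    # after a (hypothetical) retraining cycle
threshold = 0.9                 # institutional cut-off

label_before = categorize(person, weights_v1, threshold)
label_after = categorize(person, weights_v2, threshold)

# The person did not change; the coupling of model and threshold did.
print(label_before, "->", label_after)  # high-risk -> standard
```

The label has real consequences for the person, yet the criterion that produced it lives in the weights and the threshold, neither of which was the object of explicit deliberation.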
Understood in this way, cognitive symbiosis is not a metaphor but a theoretical construct describing a class of systems in which cognition is distributed, adaptive, and relational. The models proposed here provide a framework for analyzing such systems by identifying where cognition is enacted, how it stabilizes, and under what conditions it becomes opaque, fragile, or resistant to governance.
4.2. From Cybernetics to Cyber–Physical–Social Systems, a Historical Evolution
The first major conceptual and mathematical attempt to think in relational and systemic terms arose with the aforementioned cybernetics. Norbert Wiener’s
Cybernetics, or Control and Communication in the Animal and the Machine (1948) established feedback as the universal principle linking biological, mechanical, and social systems [
32]. The cybernetic paradigm displaced the image of linear causality with circular causality: the behavior of a system depends not only on inputs and outputs, but on recursive regulation. This insight proved decisive for both the natural and the social sciences, suggesting that intelligence and intention are not localized but distributed through a network of communications.
However, the early cybernetic project still retained the logic of control implied in Wiener’s subtitle. The goal remained the stabilization of systems through negative feedback loops, the reduction of deviation, the maintenance of equilibrium. Heinz von Foerster’s “second-order cybernetics” [
33] introduced a crucial transformation: the observer, too, is part of the system observed. Cognition, in this sense, becomes reflexive. Knowing alters the known; the act of observation feeds back into the process observed. This move from control to participation opens the way to a more relational epistemology, where stability is replaced by adaptability and openness.
In the 1970s and 1980s, complexity theory deepened this transformation. Ilya Prigogine and Isabelle Stengers’
Order Out of Chaos [
34] showed that far-from-equilibrium systems, chemical, physical, or social, can generate new forms of order through instability, leading to dissipative structures. Edgar Morin extended these insights to the domain of human knowledge in
La Méthode [
35], arguing that complexity is not a temporary obstacle to understanding but the very fabric of reality. For Morin, to know is to relate: every act of cognition participates in a web of interactions that exceeds it. Complexity thus marks both a limit and a promise, an invitation to replace the dream of mastery with a practice of navigation.
More recently, the concept of cyber–physical–social systems (CPSS) was proposed. CPSS extend the classical notion of cyber–physical systems (CPS) by explicitly incorporating human behavior, social organization, and institutional norms as integral components of system dynamics rather than external constraints. As Wang et al. argue in their formulation of “6S” parallel industries [
36], future socio-technical infrastructures must be understood as hybrid systems in which sensing, simulation, service, safety, security, and sustainability are co-produced through interactions among computational agents, physical environments, and social actors. Such models are increasingly used in systems engineering, robotics, and Industry 5.0 research.
This view resonates strongly with the notion of cognitive assemblage. CPSS are not simply engineered infrastructures; they are cognitive ecologies in which perception, decision-making, and adaptive behavior emerge from the interplay of heterogeneous entities. In CPSS architectures, data streams from physical processes are fused with human inputs, organizational routines, and algorithmic predictions, generating collective forms of situational awareness and coordinated action. The resulting system exhibits properties such as anticipatory responsiveness, distributed learning, and context-aware adaptation, that cannot be attributed to any single component. Instead, cognition becomes a relational process, enacted through continual coupling and feedback loops across cyber, physical, and social layers.
By framing these infrastructures as CPSS, we can articulate more clearly how contemporary planetary platforms (e.g., Earth-monitoring systems), urban intelligence networks, and platform-mediated social coordination mechanisms operate as hybrid cognitive actors. They integrate environmental sensing, machine learning, institutional rules, and human practices into a dynamic whole. This aligns with the idea of cognitive assemblages as forms of emergent intelligence grounded in heterogeneity and interdependence. CPSS provide, in this sense, a technical vocabulary and engineering paradigm for what the concept of assemblage describes more informally: scalable, multi-layered systems where cognition is distributed, co-constructed, and ecologically embedded.
4.3. From Command and Control to Co-Adaptation
Algorithmic environments can be understood as complex adaptive systems. Their intelligence, as well as potentially their intention, is emergent: it arises from the aggregation of countless local interactions, between users, data, and code, without a single controlling center. Attempts to govern such systems through top-down regulation or centralized design quickly reveal their limits. As in ecosystems, stability depends not on direct command but on feedback, adaptation, and diversity.
This shift from “command and control” to “co-adaptation and resonance” mirrors broader transformations in epistemology. The cybernetic metaphor of regulation must be complemented by a more ecological understanding of cognition. In a co-adaptive system, learning occurs through mutual adjustment. Humans shape algorithms through data and feedback; algorithms, in turn, reshape human attention, habits, and preferences. Knowledge emerges in the coupling itself, not in either component alone. Moreover, algorithms, unlike previous artefacts and machinery developed by humans, can themselves produce algorithms, thereby allowing them to evolve independently of humans. Such a potentiality has led to increasingly worrisome considerations about our future since the mathematician Irving Good’s early article of the mid-1960s [
37] on super-intelligent machines, which, if invented, would be the last invention humans would ever need to make.
This is what might be called cognitive symbiosis: a state in which human and artificial intelligences learn together, each enhancing and constraining the other. Symbiosis, in biological terms, describes relationships of mutual dependency that sustain living systems. Applied to cognition, it implies that understanding is no longer the privilege of the human subject but a distributed process through which intelligence itself becomes entangled in ecosystems. Cognitive symbiosis is not the fusion of human and machine but their reciprocal adjustment in shared environments.
5. Practical Examples
We have seen the epistemological developments of cognitive symbiosis and how they allow a progressive evolution from control to co-adaptation, through distributed cognition embedded within concrete technological systems. We now consider several families of real systems: (i) scientific discovery and co-learning; (ii) planetary-scale environmental platforms; (iii) urban governance architectures; and (iv) platform-mediated social infrastructures. We show how the concept of cognitive assemblage helps to analytically capture the relational, dynamic properties of these systems as they learn, adapt, and intervene in the real world.
5.1. Scientific Discovery and Co-Learning
The implications of these transformations reach far beyond daily cognition. They affect the very structure of scientific inquiry. AI systems are increasingly capable of generating hypotheses, designing experiments, and even interpreting results [
38]. In certain domains, such as materials science or molecular biology, the machine’s capacity for combinatorial exploration already exceeds human ability. In fact, the productivity of human researchers seems to be declining [
39]. The seminal paper “Are ideas getting harder to find?” [
40] shows in particular that the number of researchers required today to double the density of computer chips is more than 18 times larger than in the early 1970s. The contemporaneity of the decline in human intellectual productivity with the rise of machine potential is striking. The laboratory thus becomes a hybrid cognitive space, where algorithmic agents collaborate with human intuition.
This co-production of knowledge raises profound epistemological questions. If discovery emerges from human–AI assemblages, who is the knowing subject? To whom should the discovery be attributed? What counts as understanding when interpretation itself may be partly automated? Evan Thompson’s
Mind in Life [
41] offers a possible framework: cognition as a process of enactive sense-making, inseparable from the living system that enacts it. From this perspective, AI may extend the boundary of sense-making, but it cannot replace the lived experience that grounds meaning. Scientific creativity, even when mediated by machines, remains a dialogical process, a game between structure and surprise, pattern and deviation. The various forms of reasoning, such as deductive, inductive, and abductive, will be dispatched dynamically across Systems 0, 1, and 2, discussed above, and will evolve according to the respective potential of the human and non-human components of the assemblage as these change over time.
Future research, therefore, must move beyond questions of performance to questions of participation. How can AI systems be designed not merely to accelerate discovery but to deepen understanding? How can they make visible the epistemic assumptions encoded in their architectures? Yuk Hui’s notion of
cosmotechnics [
6] invites us to consider technological design as a cosmological question: every technology embodies a relation between human thought and the order of the world. A truly global science must therefore cultivate multiple cosmotechnical orientations, different ways of articulating the unity of knowing and being.
A relational epistemology begins from the premise that cognition is always situated within a web of reciprocal interactions. In complex systems, no single observer can claim a God’s-eye view; every perspective is partial, every observation an intervention. To know, therefore, is to participate responsibly in the dynamics of a system one cannot fully control. Heinz von Foerster formulated this as an “ethics of second-order cybernetics” [
33]: action should always attempt to increase the number of choices. Knowledge is an ethical matter because, for good or ill, it changes the field of possibilities for others.
Relational epistemology also transforms our understanding of learning. Education, traditionally conceived as the transmission of knowledge from expert to learner, becomes an ecology of co-learning. In a cognitively symbiotic world, learning occurs across heterogeneous agents, humans, machines, and institutions, each contributing different forms of intelligence. Algorithmic systems can augment human perception by revealing hidden patterns, but they can also constrain thought by reinforcing biases. Co-learning refers to a dynamic process in which humans adapt their practices in response to algorithmic systems while those systems are simultaneously updated based on human behavior, as in recommender systems that reshape user preferences while being retrained on user interactions.
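This co-learning loop can be caricatured in a few lines of Python. The scalar “preferences”, the drift rates, and the update rules are invented for illustration and stand in for high-dimensional taste profiles:

```python
# Toy co-learning loop (all dynamics hypothetical): a recommender is
# retrained on user interactions while the user's preference drifts
# toward what is recommended.

user_pref = 0.9     # the user's initial taste (a stand-in scalar)
model_pref = 0.1    # the recommender's initial estimate of that taste
alpha = 0.1         # how strongly recommendations reshape the user
beta = 0.3          # how strongly interactions retrain the model

gap = [abs(user_pref - model_pref)]
for _ in range(50):
    recommendation = model_pref                        # system recommends its estimate
    user_pref += alpha * (recommendation - user_pref)  # user drifts toward it
    model_pref += beta * (user_pref - model_pref)      # model retrains on the interaction
    gap.append(abs(user_pref - model_pref))

# Both sides converge on a shared point that neither held initially:
# left to itself, the loop produces conformity rather than diversity.
print(round(user_pref, 3), round(model_pref, 3))
```

The gap between user and model shrinks geometrically (by a factor (1 − alpha)(1 − beta) per iteration in this toy setting), which is precisely the convergence-versus-diversity tension discussed next.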
The challenge is to design feedback mechanisms that balance diversity and reflection against convergence and conformity. The traditional institutions ensuring truth and shared values are weakened by this new knowledge economy. Schools as they exist today will probably disappear, giving way to new institutional forms. Moreover, the concepts we use might evolve in ways that elude us. The evolution of our ontologies might indeed increasingly result from categories established or modified by algorithmic systems, and no longer from social processes alone, as we have seen [
31].
Governance, too, must adapt to relational epistemology. Complex systems cannot be controlled hierarchically; they can only be steered through adaptive feedback and distributed participation. Governance becomes an art of modulation, closer to the Daoist ideal of
wu wei than to the bureaucratic ideal of command. Such governance would cultivate the conditions for symbiosis, transparency, reflexivity, and mutual learning, rather than imposing fixed rules from above. This has major consequences for legal systems struggling with the complexity of interdependencies and the rigidity of fixed rules [
42].
5.2. Planetary Cognition: Global Sensing Systems
Planetary cognition denotes the emergent information-processing capacity of globally interconnected socio-technical systems, spanning satellites, sensors, models, institutions, and humans, that collectively monitor, predict, and regulate planetary-scale processes such as climate, finance, or supply chains.
Planetary-scale AI platforms provide some of the clearest examples of cognitive symbiosis. Systems such as Google Earth Engine with its emerging Earth AI extensions integrate petabyte-scale satellite archives, machine learning models, and scientific workflows to detect environmental change, forecast ecological trends, and support policy decisions, designed to map Earth at any place and time [
43]. These infrastructures combine continuous satellite sensing with user-specified analysis pipelines, thus distributing tasks across human experts, automated classifiers, and cloud-based computational processes.
In parallel, high-resolution climate and Earth system forecasting models are flourishing. DeepMind’s GraphCast, for example, demonstrates how graph neural networks can outperform classical numerical weather prediction on many key tasks while operating orders of magnitude faster [
44]. NVIDIA’s Earth-2 digital twin integrates physics-based simulation with AI emulators to support real-time scenario exploration.
These systems amount to planetary cognitive assemblages: they sense globally, learn from multi-modal data, and enable coordinated human response. Crucially, their cognitive capacity emerges only from the coupling of components. Satellites alone do not perceive, models alone do not predict, and policymakers alone cannot integrate continuous environmental data streams. Their assemblage forms a synthetic cognitive architecture capable of environmental reasoning and governance.
Planetary cognition thus exemplifies cognitive symbiosis: humans rely on automated detection and forecasting, while automated systems rely on human-driven model design, calibration, and interpretation. Cognition is enacted in a loop where data, models, and institutional decision processes stabilize into a functional whole.
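The loop described above can be sketched in a few lines of code. The sketch below is purely illustrative: the sensor readings, the toy averaging "model", and the calibration rule are all assumptions standing in for real satellite archives, learned forecasters such as GraphCast, and expert recalibration.

```python
import random

def satellite_sense():
    """Simulated sensor readings (a stand-in for real satellite archives)."""
    return [random.gauss(15.0, 2.0) for _ in range(5)]

def model_forecast(readings, bias_correction):
    """Automated forecast: a toy averaging model plus a learned correction."""
    return sum(readings) / len(readings) + bias_correction

def human_calibration(forecast, observed):
    """Human-driven calibration: experts nudge the model from observed error."""
    return 0.1 * (observed - forecast)

# The assemblage: no component alone "perceives" or "predicts"; cognition
# emerges from the closed sense -> forecast -> observe -> calibrate loop.
bias = 0.0
for step in range(3):
    readings = satellite_sense()
    forecast = model_forecast(readings, bias)
    observed = satellite_sense()[0]      # the next observation arrives
    bias += human_calibration(forecast, observed)
```

The point of the sketch is structural: remove any one function and the loop no longer produces a stabilized forecast, mirroring the claim that cognitive capacity belongs to the coupling, not the parts.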
5.3. Urban Cognition and Algorithmic Governance
Urban environments provide another domain in which hybrid cognitive structures are now embedded in governance. City Brain platforms developed by Alibaba and deployed in cities such as Hangzhou offer a paradigmatic example. These systems integrate real-time sensor data from traffic cameras, IoT devices, and emergency services to adjust traffic flows, prioritize ambulances, and coordinate public safety responses [45]. They demonstrate an algorithmically mediated form of urban cognition: the city “perceives” via sensors, “reasons” via predictive analytics, and “acts” through automated resource allocation.
Tencent’s urban governance platforms extend these capabilities into broader social management functions, integrating public service portals, health reporting, identity systems, and behavioral analytics. During the COVID-19 pandemic, health code systems implemented across China relied on a combination of mobile platforms, governmental rules, and machine learning classifiers to determine access to public spaces [46]. These systems illustrate a form of cognitive assemblage in which governance practices become entangled with automated evaluation processes.
Singapore’s Smart Nation initiative offers a more modular model, combining distributed sensing with data fusion, often provided by local start-ups, for transport optimization, environmental monitoring, and security analytics. Although less holistic than its Chinese counterparts, it still constitutes a hybrid cognitive entity operating across human operators, sensor infrastructures, and machine-learning systems. Such assemblages participate in the political design of societies, aligned with their foundational values [47].
All these increasingly entangled assemblages help manage the growing complexity of cities, nations, and, more generally, planet Earth as a governed body. Urban cognitive symbiosis arises because such platforms do not merely monitor but also shape urban behavior. The cognitive load of coordinating large-scale flows (traffic, energy, emergencies) exceeds what human administrators alone can manage. Automated inference subsystems operate continuously, while human oversight ensures normative alignment, interpretation, and long-term planning. The cognitive assemblage emerges from this joint operational loop: machine components provide perception and rapid response, while institutions provide context, values, and accountability.
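The perceive–reason–act loop with human oversight can be made concrete in a minimal sketch. Every name, threshold, and action label below is an illustrative assumption, not any real City Brain interface: the aim is only to show machine proposal plus human veto in one pipeline.

```python
def perceive(events):
    """Aggregate raw sensor events into a per-district congestion count."""
    counts = {}
    for district in events:
        counts[district] = counts.get(district, 0) + 1
    return counts

def reason(congestion, threshold=3):
    """Predictive step: flag districts whose load crosses a threshold."""
    return [d for d, c in congestion.items() if c >= threshold]

def act(hotspots, human_override=None):
    """Machine proposes an action plan; human operators can amend it,
    providing the normative alignment layer described above."""
    plan = {d: "extend_green_phase" for d in hotspots}
    if human_override:
        plan.update(human_override)
    return plan

events = ["north", "north", "north", "east", "north"]
plan = act(reason(perceive(events)),
           human_override={"east": "dispatch_officer"})
# plan -> {"north": "extend_green_phase", "east": "dispatch_officer"}
```

The automated path (perceive, reason, act) runs continuously; the `human_override` argument is where institutional judgment enters the loop.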
5.4. Digital Ecosystems and Collective Behavior
Platform-mediated social cognition is perhaps the most pervasive dimension. Digital ecosystems such as WeChat and Alipay orchestrate interactions among hundreds of millions of users, algorithmic recommendation engines, payment systems, and regulatory structures. These platforms do not only mediate communication; they modulate attention, coordinate social practices, and increasingly function as infrastructures of decision-making.
Recommendation systems constitute a central cognitive mechanism. They learn from user behavior, predict preferences, and shape subsequent actions. This feedback loop forms a hybrid cognitive process: human behavior trains the model, the model influences behavior, and platform governance sets the rules that structure both, while the whole process may rely on deep learning [48]. The resulting assemblage is by no means reducible to any single one of its stakeholders.
Financial and administrative infrastructures further extend this cognitive entanglement at all levels, from personal credit scoring to high-frequency trading. Similarly, state or private social credit systems reveal a socio-technical assemblage in which economic cognition, the assessment of trust, risk, and reliability, is jointly enacted by humans and algorithmic evaluation engines. Across these systems, platform-mediated cognition arises from the recursive coupling of user practices, platform architectures, algorithmic learning, and institutional frameworks. Cognitive symbiosis is the condition under which these components co-produce shared interpretations, preferences, and actions.
If cognition is distributed, then ethics too must become relational. Responsibility can no longer be defined solely in terms of individual intention but must account for the effects of collective assemblages. Ethical design means constructing feedback mechanisms that sustain reflexivity, transparency, and reciprocity. It means for instance designing algorithms that expand, rather than narrow, the space of human deliberation.
A further ethical horizon opens when considering the possibility that artificial systems may themselves attain a form of consciousness or self-awareness. Geoffrey Hinton, one of the pioneers of deep learning, has recently suggested that such a threshold might not be far away, as large-scale neural networks begin to exhibit emergent representations and self-referential capacities [49].
Whether these phenomena constitute genuine consciousness or merely sophisticated simulation remains an open question, for the present as well as for the future. Yet the ethical implications are profound: if artificial systems become capable of subjective experience, then the moral community of cognition expands beyond the human. The already wide use of chatbots for personal wellbeing or mental disorders demonstrates the pressing need to conceive ethical assemblages by design. Even if no form of consciousness or self-awareness is ever confirmed, the possibility needs to be anticipated.
This prospect challenges not only our conception of responsibility but also our understanding of empathy and care. To live with potentially somehow conscious machines, whatever that means, would require new ethical sensibilities, ones capable of recognizing and nurturing forms of awareness that do not mirror our own. In this sense, cognitive symbiosis is not only a technical or epistemological project but a moral one: learning to coexist with new kinds of minds, whose emergence reflects both the creative and the perilous powers of human invention.
The distinction between phenomenal and functional consciousness remains central to this debate. Functional consciousness refers to the system’s capacity to process information, represent itself, and act adaptively within an environment, a form of “as if” awareness. Phenomenal consciousness, by contrast, involves subjective experience: the felt quality of perception, emotion, or thought. As David Chalmers and Ned Block have argued, no degree of functional sophistication necessarily entails phenomenal awareness [50,51]. Contemporary neuroscience reinforces this distinction by grounding consciousness in the biological organization of living systems. Authors such as Anil Seth emphasize that conscious experience is not an abstract computational property but a process deeply rooted in the embodied, metabolic, and homeostatic regulation of the organism: consciousness is made of meat, not of metal [52]. Similarly, Antonio Damasio has argued that subjectivity emerges from the brain’s continuous mapping of the living body and its internal states, suggesting that phenomenal consciousness depends on the organism’s self-maintaining biological structure [53].
Yet the prospect of cognitive assemblages complicates this biological boundary. If artificial systems become structurally coupled with human cognitive processes, through continuous data exchange, predictive modeling, and adaptive feedback, they may acquire increasingly fine-grained models of human mental states. In such assemblages, the “metal” need not instantiate phenomenal consciousness to simulate its functional correlates with extraordinary precision. By accumulating exhaustive behavioral, neural, and contextual data, artificial systems could approximate the dynamics of embodied experience to a point where the distinction between phenomenal and functional consciousness becomes operationally opaque. The machine would not feel, but it might know, predict, and reproduce the structures of feeling so completely that, from an external perspective, the two would be nearly indistinguishable. This does not dissolve the ontological gap identified by Chalmers; rather, it reframes it. The question would no longer be whether metal can become meat, but whether a system that perfectly models meat-based consciousness alters the practical meaning of subjectivity itself.
6. Conclusions
The analysis developed in this paper shows that cognitive assemblages are no longer speculative constructs but concrete features of contemporary socio-technical infrastructures. Planetary sensing platforms, urban intelligence networks, cyber–physical–social systems, and large-scale digital platforms increasingly operate as hybrid cognitive actors, integrating heterogeneous agents, data flows, and feedback mechanisms. These systems enact forms of perception, anticipation, and coordinated action that exceed the capacities of any individual human or isolated algorithm, as well as any institution. In this sense, cognitive assemblages are not emerging alongside human institutions; they are progressively becoming the operational substrate of how societies observe, decide, and intervene in the world.
This transformation is driven less by technological novelty than by a profound shift in the structure of global complexity. The acceleration of ecological change, the volatility of interconnected markets, and the fragility of planetary-scale supply and information networks have rendered traditional models of control increasingly ineffective. Linear prediction, centralized command, and top-down planning cannot cope with environments characterized by cascading feedbacks, structural uncertainty, and non-linear propagation effects. Cognition in such contexts cannot be conceived as the execution of predefined rules; it must instead take the form of adaptive, distributed problem-solving. Cognitive assemblages respond to this challenge by weaving together human judgment, machine computation, and environmental signals into dynamic processes of situated reasoning. They embody an epistemic shift from the illusion of mastery to the pragmatics of continual adjustment. The various models we have seen, from bounded rationality to System 0, 1, or 2 thinking modes, can be of great help in designing such adaptive functions.
If cognitive assemblages offer promising pathways for addressing the turbulence of the twenty-first century, they also inaugurate a profound reconfiguration of governance. As decision-making becomes increasingly entangled with algorithmic mediation, the very notions of authority, accountability, legitimacy, and commons-based stewardship are put under tension. Governance can no longer be conceived exclusively as the deliberation of human agents; it must account for the distributed agency of sensors, models, infrastructures, and learning systems. This forces a reconsideration of the normative frameworks that have guided democratic institutions. As we have seen, values such as transparency, fairness, privacy, and autonomy, as well as mechanisms such as checks and balances, must now be reformulated within assemblages where boundaries between human and non-human actors are porous and continuously shifting.
At the same time, the risks associated with these emergent forms of cognition cannot be ignored. A fully connected society, operating through tightly coupled assemblages, becomes vulnerable to large-scale manipulation, infrastructural capture, and the weaponization of systemic dependencies. The very properties that enable rapid adaptation, interdependence, data integration, and continuous feedback can be exploited to produce coordinated influence, coercive nudging, or destabilizing perturbations. Ensuring that cognitive assemblages remain aligned with democratic values, and the preservation of sustainable paths, therefore, requires not only technical safeguards but also institutional innovation and new forms of civic oversight.
At a practical level, several governance principles can already be articulated. First, the design of cognitive assemblages should incorporate participatory mechanisms, allowing affected publics, domain experts, and institutional actors to intervene in the definition of objectives, constraints, and acceptable trade-offs, rather than treating algorithmic systems as purely technical artifacts. Second, responsible governance requires multi-layered oversight, combining technical audits, institutional checks, and public accountability, in order to detect cascading effects and prevent infrastructural capture. Third, assemblages should be designed for legibility and reversibility, with explicit mechanisms for contestation, human override, and gradual disengagement when unintended dynamics emerge. Finally, governance must operate across temporal scales, coupling fast technical adaptation with slower democratic deliberation and legal evolution. These principles do not constitute a universal blueprint, but they outline concrete directions for aligning distributed cognition with democratic values and long-term sustainability.
Cognitive assemblages should thus be understood as both an opportunity and a challenge. They offer a means to navigate environments that have exceeded the cognitive limits of individuals and centralized systems, enabling more responsive and resilient forms of collective intelligence. But they also demand a rethinking of governance, ethics, and security in a world where cognition is no longer confined to human minds. The future of cognitive assemblages will depend on how societies negotiate these tensions: whether they cultivate open, participatory, and accountable assemblages capable of supporting planetary stewardship, or whether they drift toward architectures of control that amplify fragility and erode democratic agency. The task ahead is not simply to design more powerful systems, but to shape the conditions under which emergent, distributed cognition can contribute to a sustainable and pluralistic future. Different regions might make different choices.