1. Introduction
Cultural transmission is a central process in the shaping of human societies, through which knowledge, values, and practices are reproduced, transformed, and shared within communities. This process entails a continuous symbolic negotiation in which collective imaginaries—those frameworks that organize social experience—are constantly reinterpreted and reshaped [1,2]. Throughout the history of human evolution, various technologies have mediated the processes of cultural transmission, expanding our possibilities for communication while also profoundly transforming the ways in which knowledge is produced, circulated, and legitimized [3,4,5].
In the twenty-first century, the consolidation of cyberspace and the widespread implementation of artificial intelligence-based systems have given rise to a new sociotechnical environment that substantially alters how cultures are socially transmitted and appropriated. This shift is displacing the centrality of traditional institutions—such as schools, families, and mass media—and relocating symbolic socialization within platforms governed by algorithmic architectures that automate the organization of knowledge, the ranking of content, and the production of visibility [6,7,8].
Unlike previous media, these systems do not merely operate as technical intermediaries. They have evolved into emergent forms of non-human agency, actively intervening in cultural construction processes. Their operational logic—based on predictive modeling, large-scale data extraction, and the optimization of attention—is driven by commercial and geopolitical rationales that often sideline ethical, pedagogical, or epistemic considerations [9,10,11]. This operational dynamic progressively undermines key pillars of democratic life, including symbolic diversity, cognitive autonomy, and—most alarmingly—public deliberation. The latter is particularly affected by the fragmentation of communicative space, the algorithmic personalization of information, and the narrowing of opportunities for engaging with difference, thereby weakening the exercise of a reflective and deliberative citizenship.
This scenario generates a constellation of interrelated challenges—political, legal, and educational—among which education becomes the central axis of this reflection. The educational field can no longer be limited to the integration of digital competencies into curricula. Rather, it must adopt critical pedagogical frameworks that equip individuals to understand, interrogate, and actively reconfigure the algorithmic logics that structure contemporary digital ecosystems. In this context, critical digital literacy should not be regarded as a mere instrumental skill, but rather as an epistemological and ethical condition for defending cultural pluralism, cultivating interpretative autonomy, and strengthening informed democratic citizenship [12,13,14].
This conceptual paper, therefore, proposes a critical and interdisciplinary review of the transformations affecting cultural transmission in the context of cyberspace, with special emphasis on the role of AI algorithms as new cultural agents. From a sociotechnical perspective, we argue that algorithmic agency marks a historical turning point by introducing automated forms of symbolic intervention that redefine the conditions of visibility, legitimation, and participation in the public sphere. In response to this diagnosis, we advocate for a critical pedagogy that fosters awareness, resistance, and transformative action within the digital environments we inhabit. As a conceptual contribution, this paper draws on interdisciplinary literature from the sociology of technology, critical pedagogy, and digital culture studies to articulate a theoretical framework for understanding how algorithmic infrastructures mediate symbolic power in education and society. The central question guiding this inquiry is as follows: To what extent can AI-based algorithmic systems be considered cultural agents that influence processes of meaning-making, and in what ways do they reconfigure the conditions of possibility for critical educational practice and meaningful democratic participation in digitally mediated environments?
In today’s datafied educational landscape, where AI-driven tools increasingly mediate learning processes, the need for critical frameworks that interrogate the cultural implications of algorithmic systems has become urgent. While technical analyses abound, there remains a gap in conceptual scholarship linking algorithmic design to questions of symbolic authority, pedagogical agency, and cultural imagination. This paper seeks to address this gap by proposing a theoretical model that connects algorithmic agency with educational transformation.
To structure this inquiry, we propose a conceptual framework of Critical Algorithmic Mediation (CAM), which articulates algorithmic agency along three analytical axes: (1) infrastructural control, referring to the technical and political design of digital systems; (2) cultural imaginaries, which address how algorithmic logics shape symbolic meaning and perception; and (3) pedagogical resistance, which focuses on educational strategies to critically engage with and transform these conditions.
This framework aims to bridge sociotechnical critique with emancipatory pedagogical practice, providing a structured lens through which to interrogate the cultural politics of algorithmic systems.
2. The History of Cultural Transmission and Media
The history of cultural transmission is inseparable from the evolution of the media and technologies that have enabled societies to share, preserve, and transform knowledge, values, and ways of life. Each technological innovation that expanded the expressive capacities of human communities also shaped power structures and the conditions of access to, appropriation of, and legitimation of knowledge [3,4,5]. Far from functioning as neutral channels of communication, media operate as sociotechnical matrices that configure thought, perception, and forms of social experience.
For millennia, orality was the predominant mode of cultural transmission. Through stories, songs, rituals, and embodied narratives, communities reproduced their knowledge without relying on permanent material support. This deeply situated and relational stage fostered a strong connection between word, body, and collective memory. However, it also entailed structural limitations: the need for physical co-presence, the fragility of memory, and the dependence on immediate contexts for the preservation of knowledge [3,15].
The emergence of writing marked a fundamental turning point in cultural transmission, enabling the fixation of knowledge and its dissociation from immediate time and space. According to Goody, this transformation introduced new forms of abstract thinking, cognitive rationalization, and social organization by turning knowledge into an object of accumulation, analysis, and control [16].
The invention of the printing press in the fifteenth century exponentially increased textual reproduction and access to information, consolidating modern literate culture. It also contributed to the rise of Enlightenment rationalism, scientific thought, and modern individualism as dominant cultural forms in the West [5].
In the twentieth century, the rise of mass media—radio, cinema, and television—expanded the reach of cultural transmission to national and global scales. These media consolidated vertical structures of dissemination, concentrating symbolic power in the hands of state and corporate actors. As McLuhan famously asserted, “the medium is the message,” emphasizing that what matters most is not the content that circulates, but how each communication technology transforms the perceptual, cognitive, and social structures of its users, shaping new forms of collective experience [4].
By the late twentieth century, the expansion of the Internet and the emergence of cyberspace as a new symbolic environment marked another milestone in the history of communication. Cyberspace emerged as a technical and semiotic domain characterized by hypertextuality, networked connectivity, and decentralized knowledge flows [17]. Initially, this digital ecosystem appeared to enable more horizontal and participatory forms of cultural exchange, broadening the range of voices present in the public sphere.
Early theorists described these dynamics through concepts such as convergence culture, collective intelligence, and participatory culture [17,18], which emphasized the role of users as active contributors in symbolic production. However, this optimistic vision would soon be challenged. Structural inequalities in access to digital tools, coupled with the increasing dominance of major technology corporations, began to reshape the landscape of digital culture.
This process led to the platformization of the internet—a transition from a decentralized, collaborative model to a more hierarchical ecosystem governed by algorithmic infrastructures. These developments, which will be examined in greater detail in the following section, reconfigured the conditions of access, visibility, and symbolic circulation, laying the groundwork for a new, automated form of cultural transmission.
3. Cyberspace and Digital Culture: From Participatory Optimism to Algorithmic Control
The expansion of cyberspace in the 1990s marked a profound disruption in the modes of cultural transmission. For the first time, humanity had access to a global communication environment that allowed for the production, circulation, and exchange of symbolic content without the mediation of centralized institutions. This new ecosystem was initially celebrated for its democratizing potential: it enabled decentralized access to knowledge, encouraged self-expression, and amplified the diversity of voices in the digital public sphere [17,18].
During this foundational phase, an optimistic vision of the digital environment gained traction. Pierre Lévy conceptualized cyberspace as a symbolic space of networked interaction and co-creation, introducing the notion of collective intelligence as a new mode of distributed cultural production [17]. Henry Jenkins, in turn, coined the concepts of convergence culture and participatory culture to describe how users were becoming prosumers—simultaneously producers and consumers—actively contributing to the creation and dissemination of meaning [18].
The rise of blogs, wikis, forums, and social media platforms in the early 21st century reinforced this reading, fostering a sociotechnical imaginary centered on horizontality, collaboration, and symbolic diversity. Authors such as Bruns introduced the concept of produsage to emphasize the breakdown of boundaries between cultural production and use [19]. The emerging digital culture was thus interpreted as a space of creative reappropriation, open learning, and civic experimentation.
However, this enthusiasm was soon tempered by more critical perspectives. Jenkins himself warned of a participation gap, that is, the persistence of structural inequalities in access to the skills, resources, and conditions required for meaningful participation [18]. Simultaneously, an economic model based on attention monetization began to take shape, wherein technology platforms transformed digital space into an environment governed by opaque algorithmic infrastructures [8,10].
This platformization process brought about a deep mutation in the regime of cultural mediation. Platforms such as Facebook, YouTube, TikTok, and Instagram emerged as dominant intermediaries in symbolic circulation, automating content curation through recommendation algorithms designed to maximize user retention, virality, and profitability [6,7]. This automated mediation radically reconfigures the conditions of visibility and recognition: what is displayed—and what remains hidden—is determined by algorithmic criteria operating under commercial logics, rather than pedagogical or democratic principles [9].
As a result, digital culture ceased to be governed by horizontal and open dynamics and became a highly centralized ecosystem, where symbolic decisions are made by automated systems of classification and prediction. Gillespie describes this phenomenon as the construction of architectures of visibility, in which algorithms exert informational authority, determining which content is rendered relevant and which is marginalized or rendered invisible [7].
Unlike traditional media, where curation was a deliberative and, in theory, publicly contestable practice, algorithms operate under a black-box logic that complicates accountability and limits public understanding of the mechanisms shaping cultural transmission [11].
In this context, cyberspace has been transformed into a space governed by platforms that monopolize digital infrastructure and extract economic value from cultural interactions. This phenomenon has been conceptualized by Zuboff as surveillance capitalism, a logic in which personal data becomes raw material for predicting and modeling future behaviors [10]. Algorithms are not neutral; they strategically select and promote content to capture attention, influence user decision-making, and optimize economic performance.
Moreover, personalized filtering and extreme segmentation of information have contributed to the fragmentation of the public sphere. The creation of filter bubbles [20] and echo chambers [21] impedes encounters with difference, reduces symbolic diversity, and weakens the conditions necessary for democratic deliberation. Instead of a shared public sphere, we witness the emergence of closed micro-spheres, algorithmically designed around consumption profiles, dominant emotions, and ideological predispositions. This emotionally engineered experience has been analyzed as a form of digital nihilism [22].
Thus, we have shifted from a participatory cyberspace to an algorithmically managed environment that marks a decisive transformation in the history of cultural transmission. This mutation compels us to rethink contemporary notions of power, agency, and legitimacy in the cultural domain, as algorithms can no longer be understood as mere technical tools. Rather, they actively shape social imaginaries, guide the circulation of public narratives, and delimit the symbolic frameworks through which we interpret the world.
4. Algorithmic Mediation and Epistemic Impacts: A Conceptual Proposal
The transformation of the digital environment into a space governed by algorithmic architectures raises a fundamental question: can algorithms be considered agents in processes of symbolic production? Before addressing this question, it is important to distinguish between the broader category of algorithms—understood as rule-based procedures for processing information—and artificial intelligence systems, which involve adaptive, learning-based models capable of recognizing patterns, making predictions, and modifying their outputs in response to new data inputs. While all AI systems operate through algorithms, not all algorithms qualify as AI. This distinction is crucial for understanding the specific challenges that emerge when automated systems begin to intervene in cultural and educational domains. In what follows, unless otherwise specified, the term “algorithm” refers specifically to AI-based algorithmic systems—that is, adaptive and complex models capable of learning from data, identifying patterns, and producing culturally relevant outcomes without direct human oversight.
This conceptual clarification allows us to move beyond a purely technical view of algorithms and to interrogate their growing cultural significance. Traditionally, algorithms have been conceived as technical tools subordinated to human control. However, contemporary AI-based algorithms—complex, adaptive, and ubiquitous—challenge this instrumental perspective. Their capacity to learn, classify, filter, predict, and recommend content positions them as active actors within the digital cultural ecosystem [7,23].
From a constructivist and sociotechnical perspective, culture should be understood as a dynamic process of meaning-making and negotiation, expressed through symbols, narratives, and shared imaginaries [2,24]. Within this framework, algorithms are not neutral infrastructures but sociotechnical devices that actively participate in shaping what becomes culturally visible, relevant, and legitimate. In Latour’s terms, they function as non-human actants embedded in hybrid networks of distributed agency [24].
The concept of algorithmic agency helps to capture this phenomenon. Unlike traditional forms of technical mediation—where artifacts operated under explicit human control—contemporary algorithms exhibit forms of operational autonomy shaped by their programmed objectives, training data, and user interactions. While they lack consciousness or moral intentionality, their functioning produces structuring effects in cultural environments by reconfiguring which information circulates, which voices are amplified, and which forms of knowledge are privileged. This form of agency is not equivalent to human will, but should be understood as a relational capacity emerging from interactions between technical systems, human inputs, and institutional goals. Following Seaver, Latour, and Suchman, algorithmic agency can be described as a distributed effect within sociotechnical assemblages, where meaning and decision-making are co-produced by humans and machines operating in complex feedback loops [6,24,25,26,27].
This non-human agency manifests in a relational and distributed manner. As Seaver and Gillespie argue, algorithms must be understood as assembled systems incorporating human decisions, historical data, economic logics, and technical goals [7,23]. These mediations take shape as hybrid practices between bodies, code, and culture [28]. Their agency results from multiple layers of mediation—design, training, implementation, and use. Consequently, their power lies in their capacity to reshape symbolic environments through repetitive, normalized, and opaque operational patterns. These usage habits reinforce algorithmic authority and its cultural naturalization [29].
One of the most significant expressions of this agency is the algorithms’ role in cultural design [9]. Traditionally, cultural design—the structuring of symbolic frameworks that organize collective meaning—was the responsibility of institutions such as schools, mass media, or communities of practice. In the current digital landscape, this role is increasingly assumed by algorithms that select, rank, and recommend content based on performance metrics such as sustained attention, virality, or conversion rates [30,31].
This form of design does not respond to democratic or ethical principles but to criteria of profitability and efficiency. Algorithms tend to privilege content that elicits intense emotions, reinforces cognitive biases, or appeals to dominant values. Algorithmic design embodies cultural, ideological, and situated decisions [26], reinforcing a symbolic logic that is far from neutral. These dynamics foster cultural homogenization and marginalize dissonant symbolic expressions. Algorithms not only reflect pre-existing preferences but also amplify and stabilize them, reducing exposure to symbolic diversity [30].
To advance the conceptual clarity of algorithmic agency, we propose a typology comprising three interrelated dimensions:
Structural agency, referring to the infrastructural role of algorithmic systems in shaping the architecture of digital environments and governing flows of information;
Operational agency, denoting the dynamic processes through which algorithms classify, filter, and personalize content in real time;
Symbolic agency, which captures the cultural effects of algorithmic curation in shaping collective imaginaries, value hierarchies, and affective economies.
This tripartite model enables a more granular understanding of how algorithmic systems intervene across technical, operational, and cultural layers of digital life. It allows us to analyze their role not only as mediators of information but as active agents in the design of symbolic environments.
Furthermore, advanced algorithmic logic introduces new forms of cultural normativity. By automating decisions about what deserves attention, algorithms intervene in the formation of identities, values, and truth criteria. This shaping capacity positions AI as a structuring force in contemporary socialization, capable of influencing how we perceive the world, construct alterity, and orient our desires [6,32]. Technology is not merely a tool, but an externalized memory that helps form subjectivity [33].
It is important to emphasize, however, that acknowledging algorithmic agency does not imply embracing a naïve form of technodeterminism. As Feenberg has argued, all technologies are socially constructed and embody values, ideologies, and power relations [8]. Algorithms, as sociotechnical artifacts, encapsulate the decisions of their designers, the intentions of the corporations that implement them, and the data that feeds them—data that often reproduces structural inequalities [31,34,35].
In sum, algorithms have acquired an operative form of agency that positions them as new cultural designers. Their capacity to intervene in the architecture of knowledge and collective imaginaries poses pressing epistemological, social, and political challenges.
5. Algorithmic Agency and Symbolic Power: Epistemological and Social Challenges
The notion of algorithmic hegemony helps foreground the symbolic power of algorithmic systems to shape what counts as knowledge, authority, and legitimate discourse. Unlike purely instrumental views of technology, this concept insists on the epistemic, cultural, and affective functions of algorithmic infrastructures. Such systems do not merely operate at the level of efficiency or optimization, but participate in deeper regimes of meaning-making, influencing how subjects, identities, and truths are constructed and recognized. Whereas critiques of platform capitalism and surveillance capitalism foreground economic extraction and behavioral manipulation [10], the notion of algorithmic hegemony used here emphasizes the symbolic and epistemic dimensions of algorithmic design: how algorithms shape credibility, visibility, authority, and the construction of meaning within digital cultures, functions often overlooked in economistic critiques.
Recognizing algorithmic agency entails acknowledging that algorithms do not merely mediate culture—they actively intervene in the production, circulation, and legitimation of meaning. This emerging role positions them as actors in the structuring of symbolic power, that is, the capacity to define what is considered true, valuable, or visible within a society [32,36]. In today’s digital environment, algorithms participate directly in the production of cultural hegemony, as they determine—through automated processes—which narratives gain centrality, which voices are amplified, and which are systematically marginalized or silenced.
This phenomenon can be conceptualized as algorithmic hegemony: a regime of visibility and symbolic legitimation operated by automated technical infrastructures and governed by corporate platforms. Unlike traditional forms of symbolic power—anchored in institutions such as schools, media, or academia—this hegemony functions opaquely, without effective mechanisms of accountability, and is driven by an economic logic centered on data extraction and profit maximization [10,11].
From an epistemological standpoint, the very conditions of knowing are radically reconfigured. Algorithms do not merely select the information to which we are exposed; they also shape interpretive frameworks that define what can be thought [37]. They function as epistemic infrastructures [7,38]—devices that determine which forms of knowledge are granted central positions and which are relegated to the margins of the symbolic space. In this context, algorithms tend to favor fast, easily quantifiable, and emotionally impactful knowledge, to the detriment of slower, more complex, or counter-hegemonic forms of understanding.
This logic of acceleration and cognitive simplification fuels a technocultural reduction of knowledge [33], in which all meaningful content must be translated into processable data, comparable metrics, and profitable predictions. Situated epistemologies, subaltern knowledge, and alternative narratives have limited space within such architectures—posing a direct threat to the right to symbolic diversity and to epistemic pluralism [31,39,40].
At the social level, algorithmic agency reshapes the conditions under which citizenship is exercised. By automating content selection and personalizing informational experiences, algorithms contribute to the fragmentation of the public sphere. Phenomena such as echo chambers and filter bubbles [20,21] undermine the deliberative foundations of democracy by reducing exposure to difference and reinforcing pre-existing beliefs. As a result, citizenship is increasingly enacted in segmented, emotionally charged, and polarized environments.
Moreover, the concentration of algorithmic power in the hands of a few technology corporations raises serious concerns regarding cultural sovereignty. The rules that structure global symbolic flows are not defined through democratic deliberation but are shaped by opaque corporate and geopolitical logics. This renders cyberspace a privatized territory, where access to knowledge, symbolic representation, and cultural participation are conditioned by interests that diverge from the common good [10,32]. This control falls within a colonial epistemic logic that demands a decolonial critique of technical design [41].
In light of this situation, merely demanding greater technical transparency or improved algorithmic governance is insufficient. It is essential to question the epistemic and political frameworks that underpin these systems. What does it mean to know when informational relevance is dictated by predictive models? What forms of subjectivity emerge when digital environments reinforce bias and suppress dissent? What spaces remain for critique, reflection, and collective action in an ecosystem governed by efficiency and performance?
Such questions compel us to reconsider the role of education in this new context. If algorithms are now central actors in meaning production, then critical literacy and civic education must include a deep understanding of how these systems function, the biases they encode, and the structural effects they produce.
6. Critical Pedagogy and Digital Literacy in Response to Algorithmic Power
In the face of the challenges posed by algorithmic agency—symbolic hegemony, epistemic erosion, and the fragmentation of the public sphere—education emerges as a strategic arena for developing forms of critical resistance and the reappropriation of the digital environment. A critical pedagogy, in this context, cannot be reduced to the mere technical integration of digital tools into educational practices. Rather, it requires a profound revision of educational aims, curricular priorities, and the methodologies that shape learning experiences. Such a revision must recognize that today’s educational environments are deeply conditioned by algorithmic infrastructures that automate criteria of relevance, rank knowledge forms, and shape cultural experience according to opaque and interest-driven logics [14,42].
From this perspective, we draw upon the theoretical and political legacy of critical pedagogy, particularly the contributions of Paulo Freire, who envisioned education as a transformative practice grounded in consciousness-raising, critical reading of the world, and emancipatory action [12,43]. In the current digital landscape, this vision must be expanded to include an understanding of the algorithmic architectures that mediate experience, filter information, and operate according to economic, political, and technocratic interests that often escape democratic oversight.
In this regard, a critical education in the age of artificial intelligence must embrace the development of algorithmic awareness—the capacity to understand how automated systems function, identify their operative logics, recognize the biases they encode, and analyze the symbolic, social, and cultural effects they generate in individuals and collectives [13,30]. Such awareness, however, cannot be confined to technical or instrumental competencies; it must be coupled with ethical sensitivity and a political perspective that interrogates the regimes of power embedded in these infrastructures and actively contests the meanings they help to shape. Within this framework, we propose five foundational dimensions for a critical digital literacy aimed at challenging algorithmic hegemony:
Deep Digital Literacy: This involves moving beyond an instrumental view of technology toward a critical understanding of algorithms, their training data, the commercial models that sustain them, and the biases they reproduce. It includes analyzing platform functionality, exploring zones of opacity (so-called “black boxes”), and deconstructing the criteria by which information is ranked and filtered [11,31].
Critical Thinking and Cognitive Autonomy: Education must cultivate the ability to analyze, contrast, and question information; identify dominant narratives; and resist mechanisms of emotional or ideological manipulation. Such critical thinking is essential to counteract phenomena like echo chambers, filter bubbles, and the proliferation of oversimplified discourses in digital spaces [21,44].
Cultural Diversity and Epistemic Justice: In a context where algorithms tend to privilege dominant, viral, and homogeneous content, it is urgent to foreground historically marginalized voices, knowledges, and languages in the curriculum. This dimension entails an active commitment to cultural pluralism and epistemic justice as cornerstones of a truly inclusive digital citizenship [39,45].
Technological Participation and Digital Creativity: Beyond platform consumption, students should be recognized as active agents in shaping the digital environment—as creators, coders, designers, and storytellers. Education should encourage critical content production, the use of open technologies, participatory design, and the development of competencies to engage with the sociotechnical processes that shape the world [19,42].
Digital Ethics and Political Participation: Addressing the ethical dilemmas posed by artificial intelligence—such as privacy, surveillance, algorithmic discrimination, opacity, and governance—is key to building a citizenry capable of deliberating, demanding transparency, and actively participating in the definition of digital normative frameworks. Ethics, in this sense, must be inseparable from political action [13,32].
These dimensions should not be viewed as isolated competencies or fragmented technical skills, but as interwoven core elements of a critical pedagogy aimed at contesting the dominant logics of the contemporary digital ecosystem [13,46,47,48,49,50]. This approach draws on the principles of critical pedagogy articulated by Giroux, who emphasizes the political nature of education and the importance of fostering civic agency in technologically mediated societies [43]. It also resonates with Selwyn’s proposal for a reflective digital literacy that interrogates the sociotechnical arrangements behind educational technologies [49].
The objective is not merely to adapt educational practices to the hegemonic structures of algorithmic infrastructures, but to critically interrogate their foundations, dismantle their naturalized assumptions, and articulate alternatives grounded in ethical, epistemic, and political principles aligned with the demands of a genuinely democratic citizenship.
For instance, teachers might facilitate workshops in which students analyze TikTok's recommendation patterns to understand how symbolic agency shapes aesthetic norms, emotional trends, and content virality. In higher education, critical examination of YouTube's ranking algorithms can reveal the infrastructural logics of visibility and how they structure access to knowledge. These activities can be aligned with discussions of algorithmic opacity and the social construction of relevance, encouraging learners to interrogate not only what they see, but why they see it. Such initiatives bridge theoretical reflection and practical engagement, reimagining the classroom as a space of symbolic production, collective deliberation, and political imagination: a site for enacting resistance, creativity, and meaningful participation, and for challenging the normalization of algorithmic control over culture and experience.
The pedagogical applications of critical digital literacy must also be tailored to specific educational contexts. In secondary education, students might examine how search engine algorithms rank political or historical information, comparing different queries and discussing potential biases. In higher education, interdisciplinary seminars can engage students in mapping the infrastructures of academic platforms, exploring how algorithmic profiling influences content visibility or even citation practices. In non-formal educational spaces, such as community centers or public libraries, digital literacy initiatives can empower marginalized populations to decode the symbolic effects of algorithmic personalization on their cultural consumption.
These interventions illustrate how critical pedagogy can be adapted to different audiences and settings, transforming educational spaces into laboratories of symbolic reflection and sociotechnical awareness.
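One of the interventions described above—decoding the symbolic effects of algorithmic personalization—can be supported with a minimal, hypothetical recommender that learners inspect and modify. All topic names, weights, and update rules in the following Python sketch are invented for illustration and do not reflect any real system; the point is to let students watch a personalization feedback loop narrow the range of content a user ever encounters:

```python
# Classroom sketch of a personalization feedback loop ("filter bubble").
# Hypothetical topics and weights; not any real recommender system.

TOPICS = ["politics", "sports", "science", "arts"]

def recommend(profile, k=2):
    """Return the k topics the user's profile weights highest."""
    return sorted(TOPICS, key=lambda t: -profile[t])[:k]

def click_and_update(profile, clicked, rate=0.5):
    """Reinforce the clicked topic, as engagement-driven systems do."""
    profile[clicked] += rate
    return profile

# A user with only a slight initial preference for sports.
profile = {"politics": 1.0, "sports": 1.1, "science": 1.0, "arts": 1.0}

seen = set()
for _ in range(10):
    recs = recommend(profile)
    seen.update(recs)
    click_and_update(profile, recs[0])  # the user clicks the top item

print(recommend(profile))  # -> ['sports', 'politics']
print(sorted(seen))        # only 2 of the 4 topics were ever shown
```

Students can experiment with the initial weights or the reinforcement rate and discuss how a barely perceptible preference, once amplified by the loop, excludes entire domains of content—a hands-on counterpart to theoretical discussions of filter bubbles and cultural consumption.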
This vision echoes Paulo Freire's notion of education as a practice of freedom, where learners are encouraged to critically examine and transform the social conditions that shape their lives [12]. It also builds on the tradition of media literacy education developed by scholars such as David Buckingham and Sonia Livingstone, who emphasize the need for students to analyze, question, and reframe the media environments they inhabit [47,48]. More recently, Neil Selwyn and Cristóbal Cobo have extended these frameworks to the digital realm, advocating for pedagogical models that go beyond instrumental digital skills to encompass reflexive, ethical, and political dimensions of digital engagement [49,50].
7. Conceptual Tensions and Theoretical Limitations
While the concept of algorithmic agency provides a valuable lens for examining the cultural and political operations of digital systems, it is not without tensions. One concern relates to the risk of technological determinism: emphasizing the agency of algorithms can inadvertently downplay the role of human actors, institutional agendas, and design intentions that shape these systems. This critique reminds us that agency must not be mistaken for autonomy; algorithms act within parameters established by humans, corporations, and states, often serving normative logics of efficiency, prediction, or control.
A second conceptual tension involves the reification of agency. Some scholars caution against attributing excessive intentionality to algorithms, which may obscure the distributed and emergent nature of sociotechnical action. Rather than treating algorithms as discrete agents, it is more accurate to understand agency as enacted across networks of designers, users, data infrastructures, and interpretive communities. This view aligns with relational ontologies and constructivist perspectives, but it also complicates normative critiques by dispersing responsibility across multiple actors and systems.
A further tension concerns the ambivalence of algorithmic agency itself. While much critical literature emphasizes the role of algorithms in reinforcing surveillance, bias, and symbolic domination, some contexts demonstrate their emancipatory potential. Activist groups and alternative media have used algorithmic tools to amplify marginalized voices, bypass traditional gatekeepers, or expose institutional wrongdoing. Such cases complicate a purely critical stance and invite a more nuanced, context-sensitive approach to understanding algorithmic effects.
Finally, there are tensions within the critical literature itself. Sociotechnical theories emphasize hybridity and relationality, while critical pedagogy often relies on structural accounts of domination and ideological critique. Bridging these traditions requires a careful balancing act: acknowledging the fluidity and distributed nature of agency without surrendering the capacity for political critique and ethical judgment. This paper attempts to navigate that tension by articulating a concept of algorithmic agency that is neither reductively mechanistic nor naively celebratory, but instead attentive to both its constraints and its affordances for educational transformation.
8. Conclusions
Throughout this paper, we have argued that artificial intelligence—and particularly complex algorithms—has acquired an operative form of agency that positions these systems as significant actors in contemporary processes of cultural transmission. This agency is neither conscious nor moral in the human sense, yet it is profoundly effective: it learns, selects, prioritizes, and structures the symbolic flow that circulates through cyberspace, silently shaping the interpretive frameworks through which reality is understood [7,23].
This transformation marks a historical turning point, as it introduces non-human agents into domains traditionally reserved for individuals, institutions, or communities. As a result, algorithms do not merely reflect the world; they actively configure it according to logics of efficiency, profitability, and control—logics that directly impact cultural diversity, cognitive autonomy, and democratic deliberation [10,33].
We have contended that algorithmic agency does not operate in a vacuum, but rather within sociotechnical regimes shaped by power relations, corporate interests, and invisible infrastructures. From this perspective, algorithms must be understood as sociopolitical artifacts that participate in the production of cultural hegemony—reinforcing biases, homogenizing imaginaries, and displacing alternative knowledges [9,31].
In response to this diagnosis, we have advocated for the urgency of a critical digital literacy that goes beyond the functional use of technology. What is needed is a pedagogy that enables individuals to denaturalize algorithmic infrastructures, understand their structuring effects, and develop the capacity to ethically intervene in the design, regulation, and appropriation of digital environments. This literacy must serve an educational project grounded in epistemic justice, cultural pluralism, and democratic participation.
Within this framework, we propose four key areas for research and intervention:
Reframe the concept of agency from a relational and non-anthropocentric perspective that allows for the recognition of new forms of cultural production and co-production between humans and machines;
Problematize algorithmic mediation as a political dispute over the design of culture and meaning;
Design transformative pedagogical models that integrate critical thinking, ethics, and technological creativity as transversal axes of civic education;
Promote regulatory frameworks and public policies that acknowledge the strategic dimension of symbolic flows in cyberspace, ensuring equitable conditions for the access, production, and circulation of meaning, as well as the full exercise of cultural and epistemic rights by communities.
These lines of research are not intended to close the debate, but rather to open horizons for reflection and action in response to one of the most pressing challenges of our time. The future of cultural transmission in the algorithmic age is not predetermined; it oscillates between two antagonistic directions: one oriented toward the automation of meaning for private interests, and the other toward the critical, collective, and democratic reappropriation of knowledge and culture.
While this analysis focuses primarily on the digital ecosystems of the Global North, it acknowledges the diversity of sociotechnical realities across regions. Further research should investigate how algorithmic infrastructures shape education and cultural production in underrepresented and marginalized contexts, where technological mediation intersects with structural inequalities and epistemic exclusion.
This bifurcation will not be resolved by technology itself, but by our collective capacity to understand its implications, to deliberate openly, and to exercise agency over the infrastructures that shape the global imaginary. By framing algorithms as cultural agents and education as a space of symbolic resistance, this paper contributes to laying the conceptual groundwork for rethinking critical pedagogy in an algorithmically governed society.
Ultimately, it calls for an urgent pedagogical, political, and epistemological engagement with the cultural power of AI, and for the construction of democratic alternatives that allow us to reclaim meaning, imagination, and the future as collective goods.