1. Introduction
In higher education (HE) today, few learning situations remain untouched by generative artificial intelligence (AI). A student preparing for a seminar may rely on ChatGPT, receive algorithmically generated feedback on an assignment, or follow a learning pathway curated by platform recommendations (Hummel & Donner, 2023). Such systems do not simply support learning; they intervene in what becomes visible, which arguments are recognized, and how academic discussions unfold (Bokelmann, 2023; Hummel, 2021). In this way, they reshape fundamental processes of orientation and judgment that are not only cognitive but also pedagogical and democratic, as they determine how plurality, critique, and participation can still be cultivated under digital conditions (Richter, 2023).
The societal implications of generative AI are already the subject of extensive debate. Recent assessments warn of the distortion of public discourse, the proliferation of synthetic media, and the concentration of power in global technology providers (Nentwich et al., 2025). Critical research in education and media studies underscores that algorithmic infrastructures do not merely reflect existing practices but actively generate categories and conditions of recognition (Amoore, 2020; Couldry & Mejias, 2019; Selwyn, 2019). These dynamics, often analyzed in the domains of politics and governance, find concrete parallels in HE: narrowing of debate, opaque structuring of learning trajectories, and growing institutional dependencies on external providers (Knox, 2020; Williamson, 2017). At the same time, these dynamics intersect with the fragile, biographically embedded transitions of students who negotiate entry into study programs, shifting orientations, and the uncertainties of longer educational trajectories (Alheit, 1992, 1993; Marotzki, 1990). HE is therefore not only an institutional field but also a biographical space where processes of formation are structured by algorithmic conditions. Universities thus occupy a hinge position: they are simultaneously exposed to the systemic risks of generative AI and tasked with cultivating the reflexive and democratic capacities needed to confront them. This dual role becomes particularly evident in teaching contexts where AI systems no longer remain in the background but co-determine what is taught, discussed, and assessed. The challenge is therefore not only to integrate new technologies but to ask how HE can continue to serve as a space in which political thinking, deliberation, and socially reflected action are possible under algorithmic conditions.
The idea that democracy requires education has a long tradition in political and educational thought. Negt (2004) emphasizes that democracy is the only form of government that must be actively learned and appropriated by its citizens, since it depends on their capacity to participate, to judge, and to act responsibly. In this sense, democracy is not a self-sustaining societal order but a living practice that must be continuously renewed through education and collective experience. Himmelmann (2007) builds on this view by stressing that democratic life relies on people who, through educational engagement and shared practice, sustain and reshape its foundations in ever-changing conditions. These demands become even more pressing in view of diversity and intersectionality. Algorithmic infrastructures do not affect all students equally but interact with patterns of natio-ethno-cultural belonging, gendered experiences, and social inequality (Bourdieu, 1987; Fraser, 2003; Mecheril, 2003; Mecheril & Seukwa, 2006). Recognition and visibility are unevenly distributed, and digital mediation can both reinforce and destabilize these asymmetries. Such dynamics recall the problem of epistemic injustice, where credibility and interpretive authority are unequally assigned, leaving some voices systematically marginalized or unheard (Fricker, 2007). Addressing democratic education in the digital condition therefore calls for an approach that links institutional and pedagogical analysis with biographical orientations and questions of justice. Universities are thus challenged to sustain spaces of democratic formation where algorithmic mediation becomes a subject of critical reflection rather than an unseen structuring force.
This article responds to that challenge by combining an education-theoretical perspective with empirical analysis. Conceptually, it connects debates on AI and democracy with pedagogical traditions of reflexivity and subject formation, acknowledging the entanglement of democracy theory and democracy education (Berkemeyer & May, 2023). Empirically, it examines how students and lecturers interpret and navigate algorithmic mediation in their everyday practices.
The central analytical focus of this study is to examine how the interpretations and orientations articulated by students and lecturers reflect the ambivalences of algorithmic mediation while at the same time indicating how democratic education can be sustained under digitally reconfigured conditions of learning and teaching. Against this background, the study is guided by the following research question: How do students and lecturers in HE interpret and navigate the presence of generative AI in their academic practices, and what do these orientations reveal about the conditions and possibilities for sustaining democratic education under algorithmic mediation? To address this question, the next section develops the theoretical framework that situates Bildung, biography, democratic education, and algorithmic mediation as the conceptual foundations of the study.
2. Theoretical Background—Education-Theoretical Perspectives on Democracy Under Generative AI
Democratic education under digital conditions requires a theoretical framework that reaches beyond the transmission of competencies. At its core, education can be understood as a reflexive process of Bildung¹, in which subjective orientations, institutional practices, and societal structures interact. Dewey’s conception of democracy as a way of life remains a crucial reference point. He emphasizes that education is not only a means of acquiring knowledge but a collective practice through which citizens learn to negotiate shared problems (Dewey, 2011; Gutmann, 1999). In this tradition, democratic education is not conceived as the achievement of a fixed end state but unfolds as a dynamic and fragile process in which individual self-assurance, societal participation, and technological mediation intersect (Fauser, 2022; Habermas, 1992). Klafki’s concept of the capacity for responsible self-determination² frames this process as the cultivation of orientation in contexts of uncertainty, while Biesta’s triad of qualification, socialization, and subjectification clarifies that education encompasses multiple and sometimes conflicting functions (Biesta, 2013). Under algorithmic conditions, these functions are reconfigured: qualification through automated assessment, socialization through platform-mediated participation, and subjectification through encounters with opaque infrastructures.
Biesta (2013) also stresses that democracy involves exposure to the unpredictable and the unfamiliar. Rancière (1991) deepens this perspective by showing that democracy is enacted through disruption, contestation, and the insistence on equality. While Habermas grounds democratic education in communicative rationality and the pursuit of consensus, Rancière emphasizes disruption and radical equality. Read together, these perspectives reveal a field of tension between consensus and dissensus that becomes especially pronounced in digital settings, where infrastructures mediate both visibility and forms of participation. From this perspective, democratic education can be understood as cultivating epistemic friction, where encounters with opacity, disagreement, and disruption become productive moments for reflexivity and collective learning (Medina, 2013).
This democratic horizon also involves a biographical perspective. Educational trajectories are not linear but marked by transitions, ruptures, and contingencies. Alheit’s notion of Biographizität³ conceptualizes learning as the ongoing negotiation of self and world across discontinuities and turning points (Alheit, 1992, 1993, 2003b). Marotzki (1990) emphasizes that Bildung emerges in processes where individuals reconfigure their self-relations and interpretive horizons in response to societal challenges. Biographical research has repeatedly shown that educational processes unfold as dialogical negotiations between subjective meaning-making and institutional frameworks (Dausien & Kelle, 2005; Egger, 1995, 2006). Comparable insights appear in international adult learning theory, where Merriam et al. (2007) stress the embeddedness of learning in the broader life course, and in Marsick and Watkins’s (1992) work on informal learning, which highlights the contingent and unplanned character of formative experiences. Under algorithmic conditions, these negotiations are refracted through infrastructures that filter recognition, accelerate decision-making, and impose classificatory logics. What Alheit described as fragile and transitory processes connects closely with international debates on biographical learning (Tedder & Biesta, 2009; West et al., 2007) and can be related to Mezirow’s (2000) concept of transformative learning as a reorientation of meaning perspectives. These perspectives situate biographical research within a wider international discourse on formation and reorientation, highlighting how individuals negotiate continuity and change under disruptive conditions.
Democratic education is also closely linked to diversity and intersectionality. Mecheril (2003) and Mecheril and Seukwa (2006) show how natio-ethno-cultural belongings shape educational experiences, while Dausien (2004) highlights the entanglement of gender and biography. Fraser’s (2003) dual focus on redistribution and recognition indicates why democracy involves not only access but also symbolic orders that determine whose voices are heard. Bourdieu’s (1987) analysis of habitus and distinction illustrates how educational fields reproduce hierarchies that may be reinforced by digital mediation. Critical race theory provides further insights into how systemic inequities shape educational trajectories, while Ahmed (2012) demonstrates how institutional cultures of diversity can simultaneously recognize and marginalize minority voices. Together, these perspectives describe HE as a site where biographies, inequalities, and identities intersect and where algorithmic infrastructures may stabilize or unsettle these dynamics.
With the rise of generative AI, these dynamics acquire additional layers of complexity that extend beyond technical questions. Algorithmically mediated platforms structure visibility, prioritize information, and channel participation in ways that are not neutral but embedded in social and cultural contexts (Amoore, 2020; Helsper, 2021; Selwyn, 2019). Knox (2020) shows that HE is increasingly shaped by algorithmic infrastructures that mediate access to knowledge while preconfiguring futures of learning, often privileging efficiency and prediction over reflexivity and dialogue. In practice, personalization, feedback systems, and gamification introduce subtle yet pervasive forms of guidance that shape how students orient themselves in learning environments (Hummel et al., in press). Many of these processes remain opaque, producing what Busch (2016) describes as a normalization of algorithmic influence. Pasquale’s (2015) analysis of the ‘black box society’ highlights how algorithmic decision-making often eludes scrutiny, raising questions of accountability that affect educational legitimacy. At a societal level, these dynamics mirror broader risks such as discursive narrowing and dependency on global infrastructures (Nentwich et al., 2025). As Couldry and Mejias (2019) argue, such developments culminate in a form of data extractivism that transforms participation itself into a resource to be appropriated and monetized. What appears as engagement and inclusion thus simultaneously reproduces relations of dependency, revealing how the logics of data capitalism extend into the very practices through which knowledge and education are produced. This perspective highlights how algorithmic mediation not only organizes knowledge flows but also implicates universities in larger structures of power and inequality.
From a pedagogical perspective, these insights align with longstanding debates on democracy education. Himmelmann (2001) conceives democratic learning as fostering the capacity for judgment and the ability to engage constructively with plurality and conflict. Kenner and Lange (2020) further argue that such learning depends on institutional and pedagogical conditions that render participation both possible and meaningful as a lived and reflective practice. In HE, algorithmic infrastructures intervene in both dimensions by reshaping the conditions under which judgment and participation can unfold. They influence how recognition, authority, and accountability are distributed, thereby transforming the spaces in which democratic dispositions are formed and sustained. The pedagogical challenge lies in rendering algorithmic infrastructures visible and discussable, enabling students and lecturers to critically interrogate their operations. This connects to traditions of critical pedagogy (Brookfield, 1987; Freire, 1970; Giroux, 2011; hooks, 1994), which highlight the importance of addressing power, participation, and reflexivity in educational practice.
These discourses form a shared horizon rather than a catalogue of separate traditions. Democratic education under generative AI is not reducible to technical adaptation or the delivery of competencies; it appears instead as a fragile interplay of Bildung, biography, and mediation. Bildung theory emphasizes orientation in situations of uncertainty where judgment remains provisional and contested; biographical perspectives show that such orientations are embedded in life histories marked by transitions, inequalities, and intersecting identities; and analyses of algorithmic mediation expose how infrastructures intervene by structuring recognition, redistributing authority, and delimiting participation. Seen together, these strands clarify how democratic education in HE unfolds as cultivating dispositions of reflexivity, critique, and participation—dispositions that are biographically situated, attentive to diversity, and responsive to the conditions of digital mediation.
To situate these perspectives within an integrated conceptual lens, this article draws on the DEA Framework (Democratic Education under Algorithmic Conditions) previously introduced by the author (Hummel, 2025). The DEA model conceptualizes HE as a triadic field constituted by Education, Democracy, and Algorithmic Conditions. At the first corner, Education (in the sense of Bildung) denotes processes of qualification, socialization, and subjectification in Biesta’s sense, with a particular emphasis on the cultivation of judgment, orientation, and responsible self-determination under conditions of uncertainty. At the second corner, Democracy refers to participation, recognition, and contestation, drawing on Dewey’s idea of democracy as a way of life, Habermasian notions of deliberation, and Rancièrian emphases on disruption and equality. At the third corner, Algorithmic Conditions capture the socio-technical infrastructures that filter visibility, classify subjects, and preconfigure futures of learning, including platform architectures, generative AI systems, and data-driven governance. The DEA framework (see Figure 1) does not treat these three corners as separate domains but as a relational field in which subject-formation is continuously shaped by competing claims of efficiency, legitimacy, and recognition. Within this triangular space, pedagogical practices, democratic aspirations, and algorithmic operations intersect and sometimes collide. Recognition can be redistributed from dialogical encounters into infrastructures; responsibility can shift from professional judgment toward automated procedures; legitimacy can be negotiated between institutional norms, civic expectations, and opaque system outputs. In this sense, the DEA model provides a mid-level theoretical structure that makes visible how Bildung, democracy, and digital governance are co-implicated in HE.
In the present study, the DEA framework serves as the conceptual architecture that links the theoretical background to the empirical reconstruction of orientations. The three corners of the model orient the analysis toward educational-theoretical questions of formation and judgment, democratic questions of participation and recognition, and structural questions of algorithmic mediation and governance. The five reconstructed orientations are thus not interpreted in isolation but situated within this triangular field, as empirical condensations of how students and lecturers navigate tensions between Bildung, democracy, and algorithmic conditions in their academic lives. In subsequent sections, the DEA framework informs both the coding of articulations and the synthesis of findings into three analytical axes that structure the study’s educational and pedagogical implications. In this way, the model does not stand alongside the empirical analysis but underpins it, providing a coherent lens through which the biographical, democratic, and infrastructural dimensions of generative AI in HE can be read together.
4. Results of the Empirical Study
Through the combination of Grounded Theory coding and biographical interpretation, the analysis reconstructed not just thematic clusters of statements but deeper orientations that shape how students and lecturers engage with generative AI. In line with biographical research (Alheit, 1992; Marotzki, 1990), orientations are understood as patterned ways of coping, interpreting and positioning that are embedded in longer educational trajectories rather than isolated practices. These orientations crystallized across the written articulations as responses to the institutional contradictions of HE and are marked by distinct biographical, pedagogical and democratic implications. In the following, five such orientations are presented, each grounded in excerpts from the written responses and elaborated through dialogue with education theory and democratic pedagogy. They are not conceived as fixed empirical themes but as interpretative condensations, developed through open, axial and selective coding. The orientations capture both immediate practices and broader processes of subjectivation under algorithmic conditions. They span different levels, from pragmatic strategies of coping with workload to struggles over recognition and authority, reflecting the multi-layered ways in which academic learning is reconfigured. Each orientation is empirically reconstructed and analytically enriched through educational-theoretical interpretation, ensuring that the results remain faithful to participants’ articulations while highlighting their wider significance for HE.
4.1. Pragmatic Orientation: Coping with Workload
The first orientation reconstructed from the written responses centers on how students and lecturers pragmatically cope with the workload and accelerating demands of university life, using generative AI primarily as a means to intensify existing strategies of dealing with pressure. One student respondent (S114) wrote: “Even before AI, I often skipped readings or just skimmed them. Now I put the text into ChatGPT and get a summary, it feels like cheating time.” This articulation shows how familiar coping practices persist but take on new shape through technological mediation. At the same time, it captures the vulnerability of early study phases, when students seek stability and belonging while navigating new academic demands (von Felden, 2006). Another participant (S22) admitted: “I used to double-check facts myself, but with ChatGPT I just take the answer. Sometimes I notice I rely on it too much.” Here dependency does not appear as fascination with AI but as reliance on its usefulness in coping with overload. Such articulations can indicate biographical turning points, where routines are momentarily destabilized and the possibility of reflexive reorientation emerges (Dausien, 2004; Schütze, 1984). A lecturer respondent (L9) confirmed this tendency from a teaching perspective, noting that students increasingly turn directly to AI tools, which creates habitual patterns in which reflection becomes secondary (Hummel & Donner, 2023).
Across these articulations, three dimensions can be distinguished. First, in moments of heightened stress such as exam preparation or assignment deadlines, AI functions as a situational resource for saving time. Second, students describe epistemic shortcuts, where system outputs are accepted without verification and reliance shifts from interpretive judgment to statistical regularities. Third, repeated use normalizes this reliance, embedding AI as an invisible part of academic practice. These tendencies continue earlier strategies of coping with institutional pressure, such as delegating tasks or routinizing learning steps (Alheit & Hoerning, 1989). Outsourcing entire readings through AI-generated summaries reflects such sedimentation of shortcuts into normalized routines. By contrast, articulations that express unease illustrate how routines can be interrupted and turned into occasions for self-reflexivity. These differences reveal how coping is embedded in biographical trajectories shaped by study entry, disciplinary socialization and future projects (Alheit, 1993, 2003a; Marotzki, 1990). They also intersect with habitus and cultural capital: while some students mobilize AI strategically as an additional resource, others risk dependency when their biographical and disciplinary resources are less robust (Alheit, 2008; Bourdieu, 1987). Lecturers’ observations further show how disciplinary traditions mediate the reception of digital tools, with hermeneutic cultures of justification being particularly vulnerable to algorithmic acceleration.
A temporal dimension sharpens this interpretation. The acceleration enabled by AI-generated outputs reflects coping practices situated within broader societal dynamics of acceleration, where students attempt to stabilize themselves within compressed schedules (Rosa, 2024), while resonance is simultaneously undermined as efficiency displaces dialogical engagement with knowledge.
From a pedagogical perspective, this orientation reveals structural tensions. Articulations that describe ‘cheating time’ and concerns about displaced reflection highlight the risk of narrowing academic learning to instrumental rationality. What is lost is not only reflection but also the relational work at the core of teaching, namely the pedagogical labor of building dialogical relationships between teachers and students (Szczyrba, 2009; Wedekind, 1986). Pragmatic coping may bring temporary relief by allowing students to redistribute energies, yet it simultaneously contracts the horizon of Bildung, which entails cultivating the capacity for responsible self-determination (Klafki, 1996) and responding to the educational call into responsibility (Biesta, 2006). Still, moments of hesitation articulated in the responses demonstrate how irritation can serve as a transformative occasion that destabilizes routines and reopens reflexive engagement (Koller, 2012).
The democratic implications extend beyond the classroom. When pragmatic coping becomes normalized, dialogical spaces for contestation and plural interpretation shrink as instantaneous answers replace deliberation and reduce opportunities for participation. At the same time, coping is not equally distributed. Students with greater cultural capital or stronger biographical resources may use AI strategically, while others risk dependency. This inequality reflects struggles over recognition and redistribution, where infrastructures silently privilege some groups while disadvantaging others (Fraser, 2003; Honneth, 2012). Pragmatic coping should therefore not be dismissed as a mere deficiency but understood as a crystallization of biographical fragility, institutional contradiction and systemic acceleration. The pedagogical task lies in treating such orientations as occasions for reflexive engagement, where the tensions between efficiency and reflection, delegation and autonomy, and algorithmic mediation and Bildung become visible and discussable. In this sense, coping practices are not only risks for democratic education but also potential entry points for cultivating judgment, sustaining relational engagement and maintaining spaces of participation in HE.
4.2. Adaptive Orientation: Learning Under Opacity
A second educational orientation reconstructed from the written responses can be described as an adaptive orientation toward opacity. Respondents repeatedly emphasized their limited insight into how generative AI systems operate and expressed little willingness to engage with this complexity. One student (S27) wrote: “Already in school I never wanted to go too deep into how technologies work. Now at university it is the same. I honestly do not want to know what happens in the background of ChatGPT. It is too complicated, and as long as it works for me, that is enough.” This articulation illustrates a biographical continuity of orientation: what was rejected in earlier educational phases is continued in HE. Rather than signalling indifference, this stance reflects a protective strategy that secures stability by sidestepping complexity. Another student response (S41) stated: “In my studies I have often learned to live with uncertainty. With AI it is similar. I have the feeling that it influences me more than I notice. But you cannot really see it, so you just continue using it.” Here opacity is acknowledged, yet the statement reflects a broader adaptation process in which opacity is normalized as part of everyday study routines.
Lecturer responses (L5, L16) expressed similar attitudes, noting that AI had been integrated into teaching practices even though its internal logic remained unclear. For some, adaptation took the form of pragmatic withdrawal, described as focusing on other tasks rather than questioning the system in depth. These articulations suggest that adaptive orientations are not confined to students. Faculty members, confronted with competing institutional demands, may also develop strategies of disengagement that function as forms of protective non-knowledge. The recurrent expression that it is “enough” if the system works illustrates how educational orientation can contract into a purely functional stance. In Klafki’s (1996) terms, the demand for responsible self-determination becomes muted when confrontation with societal challenges is bypassed. Biesta’s (2006) notion of being summoned into responsibility provides a useful lens here, yet in these responses this summons appears softened or deferred by the pragmatic need for stability. Some articulations also expressed unease, such as the awareness of being influenced by AI “more than I notice.” Such moments indicate irritation in the sense described by Koller (2012), where habitual routines lose their reliability and the potential for reflexive reorientation emerges. When lecturers noted that they had “stopped asking how AI works”, the analysis shows how decision-making authority subtly migrates into infrastructures. Fricker’s (2007) concept of epistemic injustice clarifies how such withdrawal risks producing uneven participation, as the credibility of human judgment is displaced by opaque system outputs. Honneth’s (1996) analysis of recognition further explains why this shift is not merely technical but normative: recognition is undermined when dialogical justification is replaced by infrastructural authority. The responses also highlighted that opacity extends beyond the technical operations of AI to include the implicit structures of academic life. Jackson’s analysis of the hidden curriculum (Jackson, 1968) helps make visible how unspoken expectations govern engagement with AI: what respondents withdraw from is less machine learning logic and more the contradictory institutional environment in which AI is at once prohibited and encouraged, demanded and disavowed. This produces a mode of non-knowledge that brings short-term relief while reproducing uncertainty.
From the perspective of tacit knowledge, the responses offer an additional nuance. Polanyi (1966) described tacit knowledge as the background resource that supports skilled action, yet what becomes visible here is not tacit mastery but a form of tacit non-knowledge. Rather than drawing on stable interpretive schemes, respondents rely on functional routines that secure temporary orientation without deeper understanding. This can be read as a fragile coping strategy in light of institutional contradictions. Alheit and Dausien’s work on biographical knowledge (Alheit, 1993, 2003a; Dausien, 2004) clarifies how such fragile routines may be biographically motivated attempts to maintain continuity. Egger’s analysis of the tension between institutional expectations and individual meaning-making (Egger, 1995, 2006) helps interpret these orientations as situated between personal biography and systemic pressure. These dynamics are also linked to social differentiation. Drawing on Bourdieu’s concept of habitus (Bourdieu, 1987), the responses suggest that students with more robust institutional confidence experience opacity as unproblematic, while others with more fragile biographical resources perceive disengagement as their only viable strategy. These articulations reveal how recognition, belonging and legitimacy are unevenly distributed when the hidden curriculum of AI use remains contradictory and opaque.
From the standpoint of HE didactics, the findings reflect Reinmann’s analysis of converging institutional imperatives of efficiency, digitalization, competence orientation and research excellence (Reinmann, 2025). These conditions generate pressures in which disengagement becomes an attractive form of relief. Reinmann distinguishes between educational quality as the normative horizon of Bildung and learning effectiveness as the measurable output of instructional design (Reinmann, 2016). When respondents state that it is sufficient for the system to function, effectiveness is implicitly privileged over quality, narrowing education to immediate usability while sidelining dialogical and critical dimensions.
Building on education theory, opacity must not be treated only as a deficit but also as a pedagogical challenge. Klafki emphasizes that Bildung requires confronting societal contradictions rather than bypassing them, while Biesta insists that education involves being summoned into responsibility. Moments of irritation, as articulated in the responses, indicate that this call can re-emerge even within adaptive routines.
From a democratic perspective, the adaptive orientation reveals that opacity produces uneven conditions for participation. Only some respondents articulate strategies to navigate complexity, while others withdraw. Fricker’s concept of epistemic injustice clarifies how this withdrawal threatens equal participation, while Fraser’s reminder that recognition and redistribution are interlinked highlights that opacity redistributes symbolic legitimacy as much as access to knowledge. The adaptive orientation therefore reveals opacity not only as a coping strategy but as a pedagogical site that demands new forms of educational response. HE cannot restrict itself to providing factual knowledge about AI but must cultivate reflexive capacities that enable students and lecturers to treat opacity as a shared condition of academic life. Following Reinmann’s call for a humane university, the task lies in acknowledging adaptive orientations without dismissing them as deficiencies while transforming them into occasions for critical judgment, reflexivity and democratic participation.
4.3. Relational Orientation: Authority and Resonance
A third educational orientation reconstructed from the written responses can be described as relational, as it revolves around how evaluative authority and affective resonance shift when generative AI becomes part of academic routines. What emerges is not a simple replacement of teacher judgment with automated scoring but a redistribution of credibility, responsibility and recognition across institutional and infrastructural contexts, often intensified by the accelerating pressures of workload in HE (
Rosa, 2024).
One lecturer (L12) stated: “
In the past I always insisted on discussing feedback with students, but now I notice that some trust the feedback they receive from the system more than mine. They even cite it as if it were unquestionable.” Placed against a longer teaching biography shaped by hermeneutic traditions, this articulation illustrates how algorithmic promises of objectivity can erode dialogical justification. What is experienced here as a loss of resonance in pedagogical interaction points to a redistribution of evaluative authority under institutional conditions where speed and efficiency dominate (
Reinmann, 2016). The tendency to treat system-generated feedback as unquestionable signals a transfer of legitimacy into infrastructures, exemplifying what may be called transferred authority, in which the space for dialogical negotiation narrows. Another lecturer (L7) articulated a similar tension: “
I sometimes rely on automated scoring because it helps me appear consistent and objective, especially under pressure. But at the same time, I feel I have surrendered part of my judgment.” Rather than signalling individual weakness, this response makes visible the structural contradictions of HE, where institutional imperatives of fairness and efficiency push lecturers toward risk-averse strategies. This reflects what
Egger (
1995,
2006) describes as securing objectivity through procedural stability, even when it narrows the formative horizon of assessment (
Wildt, 2013). Viewed biographically, the feeling of having ‘surrendered judgment’ resembles what
Schütze (
1984) conceptualizes as a biographical turning point, momentarily unsettling routines and opening space for reflexive reorientation (
Dausien, 2004).
From the student perspective, one response (S18) noted: “
In school I always depended on teachers’ evaluations. Now with AI it is similar. If the system says my essay is fine, I assume it is fine. I do not question it further.” This reliance on algorithmic validation exemplifies a credibility allocation dynamic, in which epistemic legitimacy is more readily conferred on algorithmic outputs than on human judgment. Biographically, this extends earlier experiences of externalized authority from school to digital infrastructures in HE (
Alheit & Hoerning, 1989). Yet the explicit remark “
I do not question it further” signals irritation that unsettles habitual patterns, pointing to reflexive potential. A lecturer with a migration-marked biography (L3) added: “
When I was a student, I often noticed that my contributions were questioned more quickly. Now, as a lecturer, I sometimes see that students accept algorithmic feedback more readily than my comments. It feels as if the system’s voice counts as more neutral than mine.” This articulation exposes how algorithmic authority intersects with existing hierarchies of recognition. What appears as neutrality can reproduce established asymmetries, as recognition is redistributed along gendered and migration-related lines. This resonates with
Honneth’s (
1996) argument that recognition depends on dialogical justification,
Fraser’s (
2003) insistence on the inseparability of recognition and redistribution, and
Besand’s (
2016) observation that ideologies of inequality persist where some voices remain less legitimate.
Viewed across responses, these articulations do not represent isolated decisions but continuities sedimented along educational trajectories. Feelings of lost judgment, uncritical reliance or diminished credibility illustrate how agency and structure remain intertwined (
Krüger & Marotzki, 2006), how biographies draw reliability from institutional frameworks (
Kohli, 1985,
1986) and how fragility becomes visible in transitional phases (
von Felden, 2006). Algorithmic evaluation inscribes itself into these trajectories by producing new textualities of recognition (
Schulze, 1999,
2006) and narrowing opportunities for self-thematization and reflexive positioning (
Hahn, 1987,
1988).
Pedagogically, the relational orientation reveals an ambivalence that is simultaneously adaptive and precarious. On one hand, automated systems provide relief under conditions of acceleration, enabling lecturers and students to redistribute their energies (
Williamson, 2017). On the other hand, this relief narrows education to procedural fairness and instrumental rationality, sidelining relational work and dialogical justification. Bildung presupposes the cultivation of responsible self-determination (
Klafki, 1996) and responsiveness to the call into responsibility (
Biesta, 2006). Moments of disruption, such as articulations of lost judgment or diminished credibility, can serve as transformative occasions for renegotiating recognition (
Koller, 2012). Didactic responsibility, as
Apitzsch (
2018) argues, lies in creating pedagogical contexts where evaluative authority is critically reappropriated rather than ceded to infrastructures.
From a democratic perspective, the implications are significant.
Foucault’s (
2006) concept of governmentality highlights how infrastructures guide conduct while redistributing authority in barely perceptible ways. When credibility is channelled into algorithmic systems, dialogical justification weakens and symbolic legitimacy is restructured. These dynamics reinforce existing inequalities, particularly those linked to gender and migration. By reordering recognition under the guise of neutrality, algorithmic systems reshape the conditions of democratic participation (
Besand, 2016;
Fraser, 2003;
Honneth, 1996). Rather than functioning as external tools, they become infrastructural media through which acceleration, workload, and institutional contradiction take effect. The relational educational orientation therefore shows how evaluative authority and affective resonance are reorganized when algorithmic mediation becomes embedded in HE. The task for pedagogy is not only to expose these shifts but to create spaces in which their ambivalences can be critically interrogated. Only then can Bildung and democratic participation be sustained under conditions where authority silently migrates into infrastructures.
4.4. Ambiguous Orientation: Improvisation and Fragility
A fourth educational orientation revolves around the pervasive experience of in-between spaces in which generative AI is situated within HE. What participants described was not primarily fascination with technology but rather the constant confrontation with contradictory expectations: in some courses AI was strictly prohibited, in others its use was implicitly taken for granted. Instead of clarity, students and lecturers reported a persistent condition of liminality, a space in which orientation had to be improvised from one day to the next.
One lecturer summarized this uncertainty pointedly: “
During my career I have seen many reforms, and with AI it feels similar: there are official guidelines, but nobody can tell us what really happens in the background. It feels like we are improvising all the time” (L1). Her remark gains meaning when viewed biographically: her long career had already been marked by continuous reforms and shifting demands, so the introduction of AI became another moment of institutional fragility. The sense of “constant improvisation” reflects what Kohli described as the erosion of the institutionalized life course, in which stable structures normally provide reliability and orientation (
Kohli, 1985,
1986). Where such structures are contradictory or opaque, academic practice no longer offers security but forces biographical navigation on unstable ground. Egger’s account of the tension between institutional expectations and individual meaning-making helps illuminate this dynamic, showing how lecturers like L1 are pushed into improvisation because institutions fail to provide clear frameworks (
Egger, 1995,
2006). From the learner’s side, the picture is similar. One student explained: “
In some seminars we are told not to use AI at all, in others it is almost expected that we use it for preparation. You never know what is actually allowed, and every decision feels risky” (S9). Her irritation makes visible how prohibition and expectation coexist in unstable ways, producing what Wimmer conceptualized as boundary-making, where institutions redraw the lines of legitimacy in contradictory forms (
Wimmer, 2008). Instead of providing orientation, such shifting boundaries place students in zones of uncertainty where every decision entails the risk of being sanctioned. S9’s account reflects Bourdieu’s notion of habitus, as her ability to navigate these contradictions is shaped by her cultural capital and prior educational experience. For some, improvisation may be navigable; for others, it deepens vulnerability and exclusion. The fragility of these in-between spaces becomes especially clear in the account of S11, a first-generation student: “
At the beginning of my studies I already had to figure out everything on my own, and now with AI it feels the same again, every teacher says something different, and I am never sure if I belong here” (S11). Her statement highlights how contradictions around AI re-activate earlier insecurities of transition and belonging.
von Felden (
2006) has shown that transitional phases are particularly vulnerable moments in which belonging and competence need to be reestablished. For S11, institutional ambiguity does not mitigate this fragility but deepens it, illustrating
Turner’s (
1969) notion of liminality as a state of suspension between stable roles. What she describes can be understood as a biographical turning point in Schütze’s sense, where irritation about ‘something not being right’ produces both dislocation and the potential for reflexive reorientation (
Dausien, 2004;
Schütze, 1984). Her oscillation between belonging and exclusion mirrors Breinig et al.’s notion of transdifference, in which individuals inhabit hybrid spaces that are neither fully inside nor fully outside (
Breinig et al., 2002).
These three accounts together indicate that ambiguity in HE is not a marginal phenomenon but a structural condition in which improvisation becomes the dominant mode of orientation. Responsibility shifts away from reliable institutional frameworks and is relocated onto individuals, leaving them simultaneously inventive and vulnerable. L1’s continuous improvisation, S9’s navigation of contradictory boundaries, and S11’s fragile belonging each illustrate how opacity in AI governance reflects the hidden curriculum of academic life (
Jackson, 1968). What remains concealed are not only the technical processes of AI but also the tacit institutional rules that structure legitimacy and orientation. As
Polanyi (
1966) noted, tacit knowledge normally provides stability by guiding action without explicit articulation, yet under algorithmic conditions this stabilizing function becomes fragile. In these responses, what surfaces is not robust tacit knowledge but tacit non-knowledge: students and lecturers are left without reliable background resources, forced to improvise orientation where institutional guidance dissolves. Theoretically, this orientation demonstrates how ambiguity destabilizes both biographical and pedagogical processes. L1’s improvisation shows how the erosion of institutional reliability undermines the security of the academic life course (
Kohli, 1985,
1986). S11’s fragile belonging exemplifies how transitions expose learners to heightened vulnerability (
von Felden, 2006). S9’s oscillation between prohibition and expectation makes clear how institutions redraw legitimacy boundaries in contradictory ways (
Wimmer, 2008). Egger’s tension between expectation and meaning-making (
Egger, 1995,
2006) runs through all three cases, showing how individuals continuously balance institutional contradictions against their own sense of orientation. Schulze’s notion of biography as text (
Schulze, 1999,
2006) and Hahn’s work on self-thematization (
Hahn, 1987,
1988) help explain how algorithmic authority inscribes itself into life stories, limiting opportunities for reflexive positioning.
From the perspective of Bildung, these contradictions point to the fragile conditions under which responsible self-determination can develop. Klafki emphasizes that Bildung requires frameworks enabling learners to critically engage with central societal challenges (
Klafki, 1996). Where frameworks dissolve into contradictions, responsibility shifts onto individuals who improvise without dialogical support. Yet as Koller argues, contradictions can also become transformative occasions that provoke reflection on agency within contingent environments (
Koller, 2012). Benner’s view of Bildung as entangled with contingency further clarifies that institutional ambiguity makes this contingency sharply visible (
Benner, 1991). The dimension of inequality sharpens this interpretation. Several female and migrant lecturers reported that algorithmic authority was sometimes accepted as more neutral than their own judgment. What presented itself as impartiality thus reinforced long-standing asymmetries of recognition. Ambiguity, in this sense, is not experienced uniformly: those with stronger biographical and cultural resources can navigate in-between spaces with greater security, while others remain more vulnerable. This asymmetry can be read through Honneth’s idea that recognition depends on dialogical justification (1996), Fraser’s argument that recognition and redistribution are mutually constitutive (
Fraser, 2003), and Besand’s insight that ideologies of inequality endure where certain voices continue to lack legitimacy (
Besand, 2016).
The democratic implications become visible in several ways. Dewey emphasized that institutions structure opportunities for participation (
Dewey, 2000). Ambiguity weakens this structuring role, leaving participation distributed unevenly: those with greater cultural capital or stronger biographical resources can improvise effectively, while others risk exclusion. Rancière reminds us that democracy involves the enactment of equality, yet inconsistent frameworks reproduce asymmetries by privileging those already skilled in navigating contradictions (1999). Foucault’s concept of governmentality sharpens this perspective by showing how ambiguity itself can function as a mode of governance, compelling individuals to self-regulate in the absence of transparent rules (2006). The in-between orientation indicates that HE is not merely challenged by generative AI but also implicated in reproducing contradictions through ambiguous frameworks. Ambiguity appears less as a temporary deficit than as a structural condition that both restricts and provokes educational practice. Read through the vulnerability of transitions (
von Felden, 2006), the tension between institutional expectations and meaning-making (
Egger, 1995,
2006), and the liminality of in-between spaces (
Turner, 1969), it becomes evident that ambiguity can simultaneously undermine and deepen Bildung. The task for universities is therefore twofold: to establish reliable frameworks that sustain responsible self-determination, and to cultivate dialogical spaces in which contradictions are addressed collectively as shared conditions of democratic education rather than endured as private burdens.
4.5. Recognition Orientation: Voice and Visibility
The final educational orientation reconstructed from the written responses concerns recognition and the micropolitics of visibility. Generative AI was described not only as a technical aid but as an active force that shapes whose voices are amplified, which contributions gain legitimacy, and whose perspectives remain marginalized. Students and lecturers repeatedly pointed out that visibility and credibility no longer emerge solely from dialogical interaction but are increasingly distributed by algorithmic infrastructures.
One student respondent (S63) explained: “
On the platform, some posts get highlighted automatically and receive many comments, while others disappear without notice. I have seen that even thoughtful contributions sometimes vanish, and then nobody reacts. It changes what the group actually talks about.” Her irritation shows how infrastructures silently reorder attention, much like what Jackson described as hidden curricula, where unspoken structures guide participation and influence what counts as legitimate learning (
Jackson, 1968). What disappears without explanation, as in her case, can gradually sediment into orientations of misrecognition that shape not only momentary interaction but also the long-term disposition to participate. Such dynamics illustrate Honneth’s account of recognition struggles, where the withdrawal of credibility erodes engagement and weakens the foundations of self-confidence (2012). Another student (S71) described a similar experience: “
You can tell that the system privileges certain styles of writing. Short, polished, and formal. If you do not match that, you stay invisible, no matter how much effort you put in.” Her observation points to how algorithmic infrastructures set tacit standards that sort contributions into visible and invisible. Polanyi’s notion that much of practice relies on tacit knowledge (
Polanyi, 1966) helps clarify the contrast observed here: tacit routines that usually provide security are replaced by what might be described as tacit non-knowledge, since the student lacks the background resources that would allow adaptive orientation.
The resulting invisibility becomes a formative experience, reviving earlier insecurities within her learning trajectory. Schütze’s concept of biographical turning points offers a useful lens for this process, showing how moments of misrecognition can unsettle habitual orientations and initiate new configurations of meaning (
Dausien, 2004;
Schütze, 1984). Whether such experiences deepen withdrawal or open paths of reflexive reorientation depends on the resources available within a biography. The lecturers’ perspective confirmed similar shifts. One respondent (L24) explained:
“In discussions, students sometimes cite the automated feedback as more reliable than peer comments or even my remarks. It alters the hierarchy of voices in the classroom.” Her remark shows how algorithmic mediation reshapes legitimacy by positioning automated outputs as epistemically superior. Egger’s analysis of the tension between institutional expectations and individual meaning-making helps to situate this: what changes here is not simply personal authority but broader conditions of evaluation that redistribute credibility into infrastructures (
Egger, 1995,
2006). Kohli’s account of institutional life courses adds another layer: when recognition frameworks become contradictory, uncertainty intensifies precisely at moments when orientation is needed (
Kohli, 1986).
The structural dimension of these struggles was articulated even more sharply by L28, a female lecturer with a migration background:
“When I was a student, I often noticed that my contributions were questioned more quickly than those of others, and I felt that both my gender and my migrant background shaped this experience. Now, as a lecturer, I sometimes see that my students accept algorithmic feedback more readily than my comments. It feels as if the system’s voice counts as more neutral than mine, even though I bring in expertise and personal experience.” Her account makes visible how algorithmic neutrality does not erase but can reinforce asymmetries of recognition. Butler’s insight that recognition is always framed by cultural intelligibility (
Butler, 1997) clarifies why some voices remain less audible, even in environments claiming neutrality. Fraser’s reminder that recognition and redistribution are inseparably linked highlights how L28’s marginalization reflects broader inequalities along intersecting lines of gender and migration (
Fraser, 2003). Yet the written responses also revealed counter-movements. Several students explained that they deliberately amplified the posts of peers whose contributions had been ignored, thereby enacting solidarity. Such practices show that recognition is not completely captured by infrastructures but can also be collectively renegotiated. Taylor argued that mutual acknowledgment is constitutive of identity (
Taylor, 1993), while
Tully (
2004) emphasized that struggles for recognition can open democratic spaces of dialogue. Acts of amplification may therefore function as biographical resources of resistance, enabling students to counter invisibility and expand the repertoire of orientations in academic life.
Experiences of invisibility and misrecognition also carried a liminal dimension. One first-generation student (S76) recalled: “
At the beginning of my studies I already had to figure everything out on my own, and now with AI it feels the same again, every teacher says something different, and sometimes I feel invisible.” Her words recall Turner’s idea of liminality, a suspension between established and emerging roles (1969). Positioned between recognition and misrecognition, her account illustrates both the risk of destabilization and the possibility of reflexive reorientation. Alheit’s notion of
Biographizität helps to explain why such moments can either sediment as withdrawal or be reinterpreted as occasions for new orientations, depending on the resources students are able to mobilize (
Alheit, 1993,
2003a). From an educational perspective, these findings highlight the fragile conditions under which recognition unfolds in HE.
Benner (
1991) emphasized that Bildung always takes place in contexts of contingency that demand reflexive judgment.
Prange’s (
2005) analysis of education as making visible clarifies why the invisibility described by S63 and S71 is pedagogically significant: when automated systems obscure contributions, formative dimensions of showing and recognition are displaced. At the same time,
Dausien’s (
2004) work reminds us that reflexivity often arises when recognition collapses, opening new interpretive horizons. What appears as misrecognition can therefore hold transformative potential if it is thematized and supported pedagogically. From a democratic perspective, these struggles raise pressing concerns. Fricker’s concept of epistemic injustice highlights that exclusion from credibility is not merely individual but systemic, shaping who can participate meaningfully in knowledge production (
Fricker, 2007).
Young (
2000) stressed that democracy depends not only on access but also on assurance that all voices count as equally valid. The accounts of S63, S71, S76 and L28 illustrate how algorithmic infrastructures privilege certain linguistic styles and cultural forms, reinforcing inequalities that universities should not ignore. Rancière’s insistence that democracy rests on the enactment of equality is relevant here: when infrastructures silently redistribute visibility, that enactment is undermined (1999). Honneth and Fraser’s debate on recognition and redistribution underscores that both dimensions must be addressed together if democratic education is to be sustained (
Fraser, 2001;
Honneth, 2011).
The written responses suggest that recognition under algorithmic mediation is not reducible to questions of efficiency or access. What is at stake is how contributions become visible and legitimate, how recognition and misrecognition accumulate across biographies, and how these dynamics intersect with existing hierarchies. Recognition appears as biographically sedimented and democratically charged: subjects struggle for visibility, legitimacy and voice in algorithmically filtered spaces. The challenge for HE lies in making these struggles visible, negotiating them collectively and transforming them pedagogically into occasions for reflexive orientation and democratic participation.
6. Implications for HE Didactics Under Algorithmic Conditions
The empirical analysis has reconstructed five orientations through which students and lecturers navigate generative AI in HE. These orientations reveal fragilities in judgment, recognition, and responsibility that are not signs of deficit but manifestations of how Bildung unfolds under algorithmic mediation. For HE didactics, a central task is therefore not merely the transmission of competencies but the capacity to transform such fragilities into productive pedagogical occasions. What is ultimately at stake is the university’s ability to render algorithmic pressures educable and to sustain democratic forms of subject-formation when infrastructures of visibility, evaluation, and orientation are technologically mediated.
As recent research emphasizes, this requires moving beyond generic appeals to “digital maturity” or “AI literacy.”
Watanabe (
2023) shows that opacity and the conditioning effects of algorithmic operations are not peripheral but central pedagogical challenges, since they determine what can be perceived, questioned, and contested in educational contexts. Didactic responses should therefore embed structured opportunities for reflection, dialogical reasoning, and recognition practices that turn algorithmic ambivalences into shared pedagogical resources (
Hummel et al., 2024). At the same time, two further dimensions are decisive: policy frameworks at the institutional level and teacher education at the professional level. Universities need guidelines that not only regulate the use of AI but explicitly conceptualize algorithmic infrastructures as educable phenomena. Likewise, didactic training must integrate reflexive approaches to AI use, enabling teachers to work productively with ambivalence rather than reduce it to technical skills. To strengthen the transparency of the analytical process, Table 1 includes representative quotes used during category development, illustrating how empirical articulations informed the reconstructed orientations.
Table 1 synthesizes the reconstructed orientations with their biographical embedding, pedagogical tensions, democratic-educational implications, and didactic strategies. It should not be read as an exhaustive set of categories or a normative model. Rather, it serves as a heuristic that connects empirical findings with HE didactics, highlighting how fragile orientations can become occasions for pedagogical work. The strategies are not recipes but proposals that make visible how algorithmic mediation can be engaged in teaching while remaining attentive to biographical trajectories and democratic conditions of recognition.
The contribution of this framework lies in its integration of educational theory, biographical research, and democracy-pedagogical perspectives into HE didactics. In contrast to approaches that emphasize AI literacy or digital citizenship as fixed skill sets, the focus here is on cultivating fragile orientations as educable conditions. Temporal sovereignty, recognition, and accountability cannot be conveyed as discrete competencies but must evolve within dialogical, curricular, and institutional practices. This perspective builds on
Dewey’s (
2000) notion of democracy as a lived practice,
Rancière’s (
1999) understanding of disruption as the enactment of equality, and
Honneth’s (
1996) view of recognition as the foundation of participation. It also refines
Himmelmann’s (
2007) dimensions of democracy by demonstrating how algorithmic infrastructures transform each of them—shaping autonomy, reorienting institutions, and redistributing the legitimacy of voices.
Policy frameworks and teacher education emerge as crucial mediating layers. Universities should establish guidelines that conceptualize algorithmic infrastructures as pedagogical phenomena, making opacity and fragility part of structured educational reflection rather than leaving them as hidden influences. At the same time, teacher education programs should integrate biographical and reflexive dimensions of AI engagement, equipping lecturers to transform algorithmic ambivalences into occasions for judgment, recognition, and democratic participation. By situating the five orientations within multi-level didactic strategies and embedding them in biographical perspectives, the framework shows how HE becomes a central arena for responding to the pressures of algorithmic mediation. Universities are simultaneously confronted with systemic risks of algorithmic governance and equipped to cultivate the reflexive and democratic capacities needed to address them. Acceleration can be reframed as an occasion for structured reflection, opacity as a shared field of inquiry, and asymmetries of recognition as a starting point for pluralization. The didactic task is to treat fragility as a productive condition, to translate algorithmic ambivalences into educable situations, and to maintain HE as a setting in which democratic subject-formation can be sustained.

These didactic implications derive directly from the five reconstructed orientations and the biographical, pedagogical, and democratic tensions identified in the empirical analysis. They translate students’ and lecturers’ articulated patterns of coping, navigating opacity, renegotiating authority, improvising under ambiguity, and struggling for recognition into concrete strategies for fostering reflexivity and democratic participation under algorithmic mediation.
7. Conclusions and Outlook
The study has shown that generative AI is not simply an auxiliary tool in HE but a structuring force that influences how learning, judgment, and recognition take form. The five reconstructed orientations (pragmatic, adaptive, relational, ambiguous, and recognition) illustrate how systemic pressures of acceleration, opacity, and infrastructural authority are refracted into everyday pedagogical practice. These orientations are not to be read as deficits but as fragile processes of Bildung, in which students and lecturers continuously negotiate autonomy, responsibility, and recognition under digital conditions.
Theoretically, the findings contribute to education-theoretical and democracy-pedagogical debates. They suggest that the capacity for responsible self-determination (
Klafki, 1996) and subjectification (
Biesta, 2006,
2013) acquires renewed importance when orientations are shaped by algorithmic infrastructures. Dewey’s conception of democracy as a way of life underlines that pedagogical spaces are vital for cultivating dispositions of trust, courage, and critique.
Rancière (
1999) emphasizes that democracy also depends on disruption and contestation, while
Honneth (
1996) and
Fraser (
2003) highlight that recognition is a precondition for participation.
Himmelmann’s (
2007) account of democracy as government, society, and way of life clarifies why universities play a decisive role: they are not only institutions of knowledge transmission but also arenas where democratic capacities are tested, challenged, and renewed.
Print and Lange (
2012) further point out that democratic education requires concrete pedagogical arrangements that enable students to practice participation in plural and sometimes contested contexts.

Empirically, the analysis indicates that fragilities surface most clearly in transitional phases such as study entry, progression through programs, and preparation for professional life. These moments are marked by heightened uncertainty, and it is here that algorithmic mediation intersects with biographical orientation, recognition needs, and institutional frameworks. Rather than attempting to eliminate fragility, universities can work toward transforming it into an educable condition. Didactic strategies such as reflective assignments, dialogical inquiry into algorithmic feedback, and diversification of recognition practices illustrate how pedagogy can sustain plurality and reflexivity. Teacher education and professional development for lecturers are essential in this respect, as they can help establish the capacity to work productively with algorithmic ambivalences rather than avoid them.

In this light, HE appears both vulnerable and formative. It is vulnerable because systemic risks of algorithmic governance—including acceleration, opacity, and asymmetries of recognition—permeate teaching and learning. Yet it is also formative because these very risks can be reframed pedagogically into occasions for judgment, reflexivity, and democratic participation. This duality positions universities at the center of societal democratization, where global technological pressures are translated into lived educational processes and democratic subject-formation remains possible.
Looking ahead, further research could broaden the comparative scope across disciplines and national contexts, examining how orientations vary with epistemic cultures and institutional traditions. Methodologically, the combination of reconstructive, biographical, and interpretive approaches offers a promising pathway for connecting empirical findings with educational theory and democracy pedagogy. On the policy level, universities would benefit from developing guidelines that not only regulate AI use but also render algorithmic infrastructures educable, turning opacity into a shared object of inquiry. The central didactic task is to design pedagogical arrangements that keep fragility educable, recognition plural, and democratic participation open. In doing so, universities can uphold their role as hinge institutions where Bildung, democracy, and digital transformation intersect in ways that remain reflexive, inclusive, and oriented toward the future of democratic education.