Article

Ethical and Responsible AI in Education: Situated Ethics for Democratic Learning

Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), TUD Dresden University of Technology, 01069 Dresden, Germany
Educ. Sci. 2025, 15(12), 1594; https://doi.org/10.3390/educsci15121594
Submission received: 15 September 2025 / Revised: 20 October 2025 / Accepted: 21 November 2025 / Published: 26 November 2025
(This article belongs to the Topic Explainable AI in Education)

Abstract

As AI systems increasingly structure educational processes, they shape not only what is learned, but also how epistemic authority is distributed and whose knowledge is recognized. This article explores the normative and technopolitical implications of this development by examining two prominent paradigms in AI ethics: Ethical AI and Responsible AI. Although often treated as synonymous, these frameworks reflect distinct tensions between formal universalism and contextual responsiveness, between rule-based evaluation and governance-oriented design. Drawing on deontology, utilitarianism, responsibility ethics, contract theory, and the capability approach, the article analyzes the frictions that emerge when these frameworks are applied to algorithmically mediated education. The argument situates these tensions within broader philosophical debates on technological mediation, normative infrastructures, and the ethics of sociotechnical design. Through empirical examples such as algorithmic grading and AI-mediated admissions, the article shows how predictive systems embed values into optimization routines, thereby reshaping educational space and interpretive agency. In response, it develops the concept of situated ethics, emphasizing epistemic justice, learner autonomy, and democratic judgment as central criteria for evaluating educational AI. To clarify what is at stake, the article distinguishes adaptive learning optimization from education as a process of subject formation and democratic teaching objectives. Rather than viewing AI as an external tool, the article conceptualizes it as a co-constitutive actor within pedagogical practice. Ethical reflection must therefore be integrated into design, implementation, and institutional contexts from the outset. Accordingly, the article offers (1) a conceptual map of ethical paradigms, (2) a criteria-based evaluative lens, and (3) a practice-oriented diagnostic framework to guide situated ethics in educational AI. The paper ultimately argues for an approach that attends to the relational, political, and epistemic dimensions of AI systems in education.

1. Introduction

What if an Artificial Intelligence (AI) system recognized your confusion before you did? It slows the pace, selects more attuned resources, and offers subtle prompts. So far, so helpful. But what if the same system began to steer your learning not toward deeper understanding, but toward predicted outcomes? At what point does guidance become governance? When does optimization cross into manipulation? This is already taking place. In many educational settings today, AI-driven personalization shapes what students see, which pathways they follow, and how achievement is defined. Efficiency increases; performance metrics proliferate. Yet something less tangible begins to shift. Education is shaped not only by pedagogical intention but increasingly by opaque algorithmic structures. Core values of democratic education such as autonomy, fairness, and openness are reframed by the logic of prediction (Floridi & Cowls, 2019; Luckin, 2018). Zhang and Aslan (2021) highlight both the potential and the blind spots of educational AI, calling for renewed attention to the conceptual foundations of learning. Technology, as understood here, is not a neutral instrument but a socio-epistemic force that co-produces meanings, values, and institutional arrangements (Bartok et al., 2023; Donner & Hummel, in press; Hummel et al., in press; Hummel et al., 2025a).1 Rather than treating education as a target of optimization, this article approaches pedagogy as a domain with its own normative structure, increasingly shaped through AI.
Two frameworks have emerged in response: Ethical AI and Responsible AI. Though often treated as interchangeable, they reflect distinct philosophical trajectories. Ethical AI draws from moral theory, invoking principles such as autonomy, justice, and beneficence to evaluate system behavior (Barocas et al., 2023; Kant, 1785/1998; Mill, 2011). Responsible AI, in contrast, centers on governance, emphasizing accountability, participatory design, and social embeddedness (Dignum, 2019; Fjeld et al., 2020; Floridi & Cowls, 2019). One asks, “Is this action morally right?” The other, “Who decides, and with what consequences?” The tension between these paradigms is not merely terminological. It reflects structural divergences between abstract principle and situated practice, between universalism and contextual sensitivity. It also raises a broader philosophical challenge: how to conceptualize agency, responsibility, and justice in learning environments shaped by technical systems. This analysis moves beyond applied AI ethics to consider how such systems participate in the reconfiguration of normative architectures. To engage this task, the article draws on five traditions: deontology (Kant, 1785/1998), utilitarianism (Mill, 1863/1987), responsibility ethics (Jonas, 1979), contract theory (Rawls, 1971), and the capability approach (Nussbaum, 2000; Sen, 1999). These traditions do not offer a unified framework, but illuminate conceptual tensions between duty and consequence, risk and precaution, equality and capability that clarify what is at stake when AI systems enter education. These tensions invite renewed attention to epistemic justice (Fricker, 2007), particularly in environments where recognition, credibility, and authority are distributed algorithmically.
AI systems are examined here not as neutral computational tools, but as epistemic agents that shape what is considered credible, knowable, and valuable. As illustrated by AI-mediated admissions decisions (Benossi & Bernecker, 2022), they influence visibility, shape opportunity structures, and participate in defining what is recognized as valid knowledge. These cases exemplify a broader trend: the embedding of educational value within predictive infrastructures. This article contributes to the philosophy of technology and AI ethics by offering a comparative analysis of Ethical and Responsible AI in educational contexts. Section 2 outlines their conceptual foundations. Section 3 examines ethical tensions between them. Section 4 explores implications for personalization, fairness, and epistemic justice. The final section proposes a situated ethics that reframes AI not as an optimization tool, but as a pedagogically consequential actor embedded in civic infrastructures of learning. To guide this inquiry, the article addresses the following research questions: (1) How do Ethical AI and Responsible AI differently frame agency, justice, and decision-making in educational settings? (2) In what ways can a situated ethics perspective reorient these paradigms toward epistemic justice, learner autonomy, and democratic co-agency? (3) How can such a perspective inform concrete evaluative and pedagogical practices under conditions of algorithmic mediation?
In response to these questions, the article makes three distinct contributions to the field:
  • It offers a conceptual map that contrasts Ethical AI and Responsible AI as normative architectures.
  • It develops an evaluative criteria set grounded in situated ethics, centered on recognition, contestability, reflective autonomy, and institutional responsiveness.
  • It proposes a practice-oriented diagnostic framework that uses these criteria to assess AI systems in educational contexts and aligns this work with the tabular tools presented later.
Rather than merely applying established ethical theories, this research critically examines their respective analytical affordances and limitations to clarify what each paradigm reveals and what it risks obscuring in AI-mediated education. The article proceeds through a conceptual-normative methodology, combining philosophical reconstruction with evaluative inference. Rather than testing predefined hypotheses, it employs a logic of inference that moves between ethical theory, institutional analysis, and pedagogical implication. This approach draws on theory synthesis, assembling deontological, consequentialist, responsibility-based, contractarian, and capability-oriented lenses, not to merge them into a single framework but to clarify their differential analytical affordances. In this sense, conceptual distinctions are treated as diagnostic instruments that reveal where normative tensions become pedagogically relevant and how they can be translated into evaluative criteria for practice. The orientation is thus neither purely theoretical nor empirical but situated at the intersection of normative critique and institutional interpretation, consistent with the reflexive tradition of educational philosophy. With this methodological stance clarified, the following section reconstructs Ethical AI and Responsible AI as distinct normative architectures.

2. Mapping AI Ethics: Philosophical Grounds and Educational Implications

AI ethics does not converge on a single theoretical foundation. It unfolds across a landscape of competing frameworks, each drawing from distinct philosophical lineages and addressing different dimensions of moral reasoning. This section examines two of the most influential paradigms in current discourse: Ethical AI and Responsible AI. These paradigms are not treated as fixed models, but as evolving normative architectures. Through a reconstruction of their respective logics, the analysis attends to their underlying assumptions, points of conceptual friction, and relevance for the design and governance of educational technologies.2 Rather than adjudicating between them, the aim is to clarify what each paradigm renders visible, what it leaves implicit, and how both shape the ethical vocabulary available to educational practice.

2.1. Ethical AI: Moral Rightness and Normative Clarity

To make the conceptual field analytically graspable, the following sub-sections distinguish Ethical AI and Responsible AI as two normative architectures that orient current debates. Ethical AI is typically framed through deontological or consequentialist reasoning: what is right, what is good, what ought to be done. These frameworks offer criteria for assessing system behavior through established moral principles. Here, Ethical AI refers to approaches grounded in moral theory, especially deontology and consequentialism, as distinct from compliance schemes or corporate ethics. It is treated as a philosophical paradigm that enables critical inquiry into the normative foundations of educational technology. Deontological ethics, as developed by Kant (1785/1998), grounds morality in duty, universality, and respect for persons. Actions are right not because of their outcomes but because they conform to universal maxims and treat individuals as ends. Applied to AI, this tradition emphasizes autonomy and the imperative not to instrumentalize human subjects. In education, this translates into concerns about manipulation, bypassing consent, or reducing learners to data points. From a Kantian perspective, AI systems in education are only ethically acceptable if they honor the learner as an autonomous subject capable of rational self-legislation (Benossi & Bernecker, 2022; Dierksmeier, 2022). Autonomy here does not mean having multiple options generated by an algorithm, but the capacity to take a reflective stance toward those options and to accept or reject them as one’s own. If adaptive systems predetermine what counts as a valid learning path and leave no space for interpretive distance, they risk reducing the learner to a means for system optimization rather than recognizing them as an end in themselves.
Utilitarian reasoning shifts the evaluative center of gravity. For Bentham (1996) and Mill (1863/1987), the ethical value of an action or a technology lies in its ability to increase collective benefit. In educational AI, this translates into a focus on efficiency: higher engagement rates, improved average performance, and reduced dropout numbers are taken as moral indicators. Under such a logic, a system that improves overall statistical outcomes appears successful even when it constrains the agency of those whose learning trajectories do not conform to dominant patterns. It is precisely at this point that the tension becomes visible. What Kant identifies as ethically impermissible—the use of individuals as an instrument for aggregate gain—utilitarianism can accept as a reasonable trade-off. Barocas et al. (2023) show that optimization processes in machine learning can improve group-level accuracy while systematically misclassifying learners with non-standard linguistic or cognitive profiles. In this sense, statistical improvement can coincide with epistemic exclusion. Responsibility ethics, especially as articulated by Jonas (1979), enters as a critique of both positions. While Jonas shares with Kant a concern for non-instrumentalization, he argues that neither present autonomy nor immediate utility is sufficient when dealing with technologies that produce long-term systemic effects. The asymmetry between technical action and future consequence, he suggests, requires a precautionary stance that keeps the ethical horizon open beyond what is immediately visible to designers or users. Rawls (1971) introduces yet another inflection. He accepts that inequalities in system outcomes may be permissible, but only if they can be justified to those most disadvantaged under fair deliberative conditions. In this shift, AI ethics acquires a procedural dimension: legitimacy no longer follows only from principle or outcome, but from institutional arrangements that allow those affected to contest and revise the evaluative logic embedded in the system.
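The trade-off noted above between aggregate improvement and subgroup misclassification can be made concrete with a deliberately simple numerical sketch. The figures and group labels below are hypothetical illustrations, not results reported by Barocas et al. (2023); they show only how a purely aggregate criterion can prefer a model that performs markedly worse for learners with non-standard profiles.

```python
# Hypothetical accuracy figures, for illustration only: an aggregate
# (utilitarian) criterion can prefer the model that is worse for a minority
# subgroup of learners with non-standard profiles.

def overall_accuracy(groups):
    """Population-weighted accuracy across subgroups: {name: (share, accuracy)}."""
    return sum(share * acc for share, acc in groups.values())

# Model A is tuned to the majority's data patterns; Model B gives up a little
# majority accuracy in exchange for far better subgroup accuracy.
model_a = {"majority": (0.90, 0.92), "non_standard": (0.10, 0.60)}
model_b = {"majority": (0.90, 0.89), "non_standard": (0.10, 0.85)}

for name, model in (("A", model_a), ("B", model_b)):
    print(f"Model {name}: overall = {overall_accuracy(model):.3f}, "
          f"non-standard subgroup = {model['non_standard'][1]:.2f}")
```

On these invented numbers, the aggregate criterion selects Model A (overall accuracy 0.888 against 0.886) even though it misclassifies two in five learners in the smaller group; this is precisely the trade-off that a Kantian reading rejects and a purely aggregative reading can accept.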
The capability approach, developed by Sen (1999) and Nussbaum (2000), exposes a further limitation shared by all previous perspectives. Both autonomy in the Kantian sense and fairness in the Rawlsian sense presume that individuals already possess the real freedom to position themselves as agents within a system. In educational AI, this cannot be assumed. A system may be transparent, statistically efficient, cautious in its design and procedurally accountable, yet still fail if learners do not have the capability to act as epistemic agents, to articulate their standpoint and to have it recognized as legitimate within the evaluative structure generated by the AI. These tensions expose a deeper abstraction problem. Ethical AI offers clarity but often at the cost of situational sensitivity. Kantian and utilitarian models rest on principles elegant in theory but difficult to apply in pluralistic, stratified settings. The issue is not only what is right or beneficial, but for whom, under which conditions, and with what consequences. As emphasized in the introduction, AI systems are not moral agents in the human sense. They lack intentionality and deliberation. Moral evaluation concerns their design, deployment, and institutional effects, including how they shape agency, recognition, and value. This invites a shift: AI is not just a tool but a socio-technical assemblage that mediates knowledge and enacts epistemic performativity. Verbeek (2011) emphasizes that these systems are involved in the moral mediation of practices, shaping how actions and responsibilities are configured. Ethical questions thus extend beyond rule-following to how systems co-produce norms and pedagogical possibilities. This resonates with postphenomenological views of technology as co-constitutive of human experience and agency (Ihde, 1990; Rosenberger & Verbeek, 2015).
Ethical AI formalizes ideals and anchors critique but remains incomplete if applied without regard for contextual asymmetry or distributed agency. As Fricker (2007) notes, ethics involves not only principles but also recognition, including who is considered a knower and decision-maker. In educational AI, this includes asking whose success is optimized, which knowledge is legitimized, and how agency is reconfigured. While Fricker (2007) offers an analytic foundation, broader engagement with feminist, decolonial, and postdigital perspectives is essential to address epistemic exclusions in AI-mediated education (Code et al., 1993; Knox et al., 2020; Medina, 2013). Education is not merely cognitive delivery, but a normative project concerned with autonomy, critical reflection, and democratic participation. This perspective informs the ethical analysis developed here as a philosophical inquiry into how values, subjectivities, and futures are co-constituted within algorithmically shaped environments. To prepare the subsequent analysis, it is essential to emphasize that these ethical traditions are not merely alternative vocabularies but distinct normative optics: each brings specific affordances while simultaneously obscuring other dimensions of educational AI. Deontological and utilitarian ethics render questions of moral rightness and outcome legible but tend to bracket contextual power asymmetries. Responsibility ethics, contract theory, and the capability approach, by contrast, shift attention toward futurity, institutional obligation, and situated agency, yet they risk losing the clarity of principle-based critique. It is precisely in the tension between these partial illuminations that the need for a situated ethics specific to educational contexts becomes visible.

2.2. Responsible AI: Process, Pluralism, and Situated Judgments

If Ethical AI clarifies the principled horizon of evaluation, Responsible AI redirects attention to the processes through which legitimacy is negotiated. Responsible AI emerged in response to such limitations. It shifts focus from outcomes to the processes by which AI systems are designed, implemented, and governed. The central question becomes “Who is involved in shaping outcomes, and under what conditions?” Rather than referring to policy or compliance alone, Responsible AI here denotes a normative paradigm rooted in procedural ethics, participatory governance, and situated judgment. Explainability has been proposed as foundational, aiming to render model behavior intelligible and contestable (Barredo Arrieta et al., 2020). This analysis does not reduce procedural ethics to policy design, but treats Responsible AI as a paradigm informed by moral theory, institutional critique, and epistemological pluralism. Clarke (2019) suggests that translating responsible AI into practice requires not only principles but well-defined evaluative routines. This orientation draws on several philosophical sources. Jonas’s (1979) ethics of responsibility argues that modern technologies, due to their far-reaching consequences, demand foresight, humility, and responsibility for future generations. Unlike Kantian ethics, Jonas highlights the asymmetry between action and knowledge. In AI, this supports precautionary design and inclusive governance. Responsibility here includes both prospective consequences and collective accountability within socio-technical ecosystems. Daly et al. (2021) emphasize that algorithmic governance raises constitutional issues regarding rights and democratic oversight.3 Rawls’s (1971) contract theory complements this by framing justice as fairness emerging through deliberation under equal conditions. In education, this entails including diverse actors—students, educators, developers—in articulating the purposes and values guiding AI (Dignum, 2019; Fjeld et al., 2020). Yet questions remain about whose voices are legitimized and how participation avoids reproducing hierarchies. Following Feenberg’s (1999) concept of democratic rationalization, Responsible AI becomes a site where technical design encodes and contests institutional power. The capability approach (Nussbaum, 2000; Sen, 1999) reframes education not as competence acquisition, but as the expansion of real freedoms, the capability to think, choose, and act. Responsible AI resonates with this view by enabling agency through design, rather than prescribing optimal paths. Education is understood as a space for autonomy, critical reflection, and democratic participation, not merely behavioral optimization. However, this paradigm has limits. By emphasizing process, Responsible AI risks reducing ethics to governance. Without anchoring, participation may become tokenistic, and pluralism may obscure systemic asymmetries. While it highlights complexity, it may lack substantive criteria for judging whether practices are not just inclusive, but just. As Harding (1991) emphasizes, inclusive design requires epistemological reflexivity and sensitivity to positionality, power, and standpoint. Couldry and Mejias (2019) introduce data colonialism to describe how AI systems participate in extraction and control, revealing the material and geopolitical asymmetries of AI governance. Liberal democratic and procedural accounts offer important tools for evaluating fairness, accountability, and transparency. 
Yet these frameworks reflect a specific normative horizon that may not capture the ethical pluralism needed for global, context-sensitive AI. Broader ethical inquiry must include non-Western perspectives that question liberal assumptions about responsibility, legitimacy, and agency (Code et al., 1993; Knox et al., 2020; Medina, 2013). These contribute to a fuller understanding of how justice is conceptualized and enacted in diverse contexts. This resonates with technopolitical theory, postcolonial critique, and feminist epistemology, which offer tools to examine how AI infrastructures shape recognition, decision-making, and educational agency.4 Where Ethical AI appeals to universal norms, Responsible AI highlights epistemic contexts, dialogic processes, and situated justice. In doing so, Responsible AI expands the ethical field by centering design processes, participation, and political accountability. Yet like the other traditions, it produces a selective focus: it renders questions of governance and pluralism visible while leaving power asymmetries and substantive criteria of justice only partially articulated. To expose these patterned apertures across all five ethical lineages, the following comparison is introduced not as a descriptive summary but as a diagnostic instrument that clarifies what each paradigm makes thinkable and what it systematically leaves opaque in educational AI. In this sense, Table 1 does not simply list conceptual contrasts between Ethical AI and Responsible AI, but functions as a heuristic transition toward a situated ethics perspective. In this article, situated ethics designates an evaluative orientation structured by four criteria: (a) Recognition, (b) Contestability, (c) Reflective Autonomy, and (d) Institutional Responsiveness. Rather than serving as abstract principles, these criteria provide a pragmatic vocabulary for examining how AI systems configure epistemic agency and democratic participation in educational settings. The following table does not merely summarize both paradigms but stages them as diagnostic lenses that suggest distinct evaluative logics in Higher Education (HE).

2.3. Tensions and Transformative Potentials Between Ethical and Responsible AI

The analytical separation of both paradigms makes it possible to examine their tensions not as contradictions, but as productive frictions. The divergence between Ethical and Responsible AI is not binary but dialectical. One emphasizes principle, the other process. One pursues moral universality, the other democratic legitimacy. In education, they articulate distinct normative imaginaries: Ethical AI invokes Bildung, not merely cultural self-cultivation, but a reflective process of autonomy and moral formation. Responsible AI emphasizes democratic education, centering pluralism and participatory justice (Zawacki-Richter et al., 2019). This dialectic is not opposition but generative tension. Each paradigm reveals the other’s blind spots. Principle without process risks closure; process without principle risks managerial neutrality. Their interplay discloses deeper philosophical stakes: how we conceptualize learners as moral and epistemic agents, the educator’s role in technified settings, and the contours of the ethical space emerging between them. This space is not fixed by technology or tradition. It is shaped through negotiation where normativity, institutional power, and technological mediation intersect. These negotiations echo the critical philosophy of technology, which rejects instrumentalist views of technology. Design is not neutral, but a site of normative inscription and social struggle. Feenberg (1999) argues technologies reflect embedded social choices. Verbeek (2011) extends this by showing how technologies mediate ethical perception and modes of engagement. AI systems are thus formative infrastructures. They condition epistemic agency, shape visibility, and organize educational opportunity.
The dialectic between Ethical and Responsible AI also highlights epistemic justice. Beyond inclusion or data fairness, this means cultivating epistemic agency, attributing credibility, and legitimizing interpretive frameworks. Fricker (2007) distinguishes testimonial and hermeneutic injustice: who is heard, and who can articulate experience. But epistemic justice cannot rest on analytic epistemology alone. As emphasized by Code et al. (1993), Medina (2013), and Knox et al. (2020), feminist, decolonial, and postdigital perspectives are crucial for theorizing how knowledge and legitimacy are differentially produced and silenced in algorithmic systems. These approaches extend the normative frame by addressing historic exclusions and how AI reconfigures them. This reorientation implies methodological consequences. It resists purely abstract theorizing and formalist consultation. Instead, it advocates methodological pluralism: integrating normative critique with empirical sensitivity to how values are enacted and transformed. Here, ethical meaning emerges not prior to, but through, sociotechnical entanglements of design, use, and contestation. Thus, the tension between Ethical and Responsible AI is not a flaw to resolve. It is a conceptual resource for reimagining ethical orientation, pedagogical legitimacy, and democratic participation within algorithmic education. To make this tension analytically productive rather than merely descriptive, the next step is to examine the ethical traditions underlying both paradigms not as closed moral systems, but as distinct ethical lenses. Each highlights particular normative affordances—autonomy, utility, precaution, justice, capability—while simultaneously backgrounding other dimensions such as power, recognition, or institutional embeddedness. The following comparative schema therefore stages these traditions as partial perspectives rather than comprehensive solutions, clarifying what each brings into focus and what it structurally leaves in the shadows (see Table 2).
These ethical lenses form not a unified doctrine but a dynamic ethical field. It is precisely within the gaps and frictions they expose that the need for a situated ethics becomes visible, one that neither abandons principle nor dissolves ethics into governance, but reorients evaluation toward recognition, contestability, reflective autonomy, and institutional responsiveness.

3. Ethical Tensions in Educational AI: Personalization, Fairness, and Epistemic Justice

This section translates the conceptual contrasts outlined above into concrete ethical tensions that materialize when AI systems are embedded in educational practice. Rather than treating these domains as isolated ethical problems, the analysis approaches them as interrelated fields in which AI systems shape the conditions of agency, recognition and participation. The question of epistemic justice in AI-mediated education is not merely about data bias or model transparency. It concerns the ontological conditions under which certain knowledges, perspectives, and reasoning styles become accessible or excluded. Heidegger’s (1949/2005) critique of modern technology as Gestell, a revealing that enframes the world as standing reserve, offers a powerful lens. When educational AI systems operationalize learning through optimization logics, knowledge risks being reduced to input-output patterns: what can be processed, classified, predicted. Other forms of knowing that resist such instrumentalization are obscured. This framing shapes epistemic justice. What is not legible to the system becomes pedagogically irrelevant. Learners who diverge from expected data patterns appear anomalous, misclassified, or invisible. As Heidegger (1949/2005) suggests, when revealing is governed by technological enframing, what escapes calculation is not simply overlooked but actively concealed. A critical ethics of AI in education involves attentiveness to what remains unmeasured, unarticulated, and epistemically marginalized.

3.1. Personalization and Ethical Conditions of Autonomy

To clarify how Ethical AI, Responsible AI, and situated ethics differ in practical evaluation, this subsection interrogates personalization technologies as a test case for autonomy under algorithmic mediation. Personalization is often presented as AI’s most transformative promise in education. By adapting content, pacing, and feedback to individual learners, AI systems aim to optimize educational trajectories. Yet such optimization can narrow rather than expand learner autonomy. Philosophically, autonomy is not merely the capacity to choose from prestructured options, but the ability to reflectively endorse one’s learning path. In Kantian ethics, autonomy is understood as self-legislation: the capacity of rational agents to bind themselves to principles that can be willed as universal laws (Kant, 1785/1998). AI-mediated personalization often prescribes paths based on historical data, behavioral predictions, and system-defined goals, optimizing for measurable outcomes that may not align with learners’ evaluative standpoints. From a governmentality perspective (Foucault, 1979; Rose, 2015), personalization may function as a technique of subjectivation that aligns learners with logics of performativity and self-optimization. Though AI lacks human intentionality, its optimization logic encodes normative priorities that shape perceived agency. Often opaque, such systems reduce learners’ capacity for critical engagement. This problem has been highlighted in philosophy of technology, where scholars emphasize that personalization should not merely fit learners into institutional metrics, but support conditions for genuine self-determination. While this critique builds on Verbeek’s (2011) concept of technological mediation, Feenberg’s (1999) critical theory of technology and Simondon’s (2020) philosophy of individuation deepen the analysis. They expose how design inscribes social rationalities and conditions subject formation. In this view, technological design itself becomes a locus of normative inscription, rather than a neutral backdrop. At the same time, non-Western and relational conceptions of autonomy challenge liberal–individualist assumptions. Frameworks such as Ubuntu and Confucian ethics reframe autonomy as relational integrity and mutual flourishing. Ubuntu’s maxim “I am, because we are; and since we are, therefore I am” (Mbiti, 1969, p. 106) emphasizes embeddedness in relational networks. Confucian ethics centers self-cultivation as intergenerational and socially mediated, suggesting that the legitimacy of personalization systems depends on alignment with collective moral development. This pluralist orientation shapes the article’s methodology. Rather than applying one ethical theory, the analysis integrates Kantian deontology, governmentality studies, and philosophy of technology with insights from relational traditions. This enables an interrogation of personalization systems attentive to both technical architectures and normative imaginaries. The goal is not to check ethical boxes but to reveal the values inscribed in design assumptions and optimization routines. Consider adaptive systems like Squirrel AI or Century Tech. These platforms use behavioral data and prediction models to guide learners along predefined paths. Though efficient, they offer limited transparency and few opportunities for reinterpretation. Autonomy as reflective self-determination is not only unmet but structurally foreclosed. The system anticipates the learner rather than inviting reflective choice and dialogical orientation.
Arendt’s (1958) concept of judgment offers a counterpoint. For her, judgment is not predictive but situated reflection within a plural world. Educational spaces should foster such reflective engagement. Predictive AI risks undermining the intersubjective conditions necessary for judgment and agency. Thus, AI systems must be seen not as neutral tools but as epistemic and normative infrastructures. Following Verbeek (2011), Feenberg (1999), and Simondon (2020), technologies mediate moral perception and educational experience. The article does not propose a definitive model for ethical personalization but clarifies the normative tensions at stake and the conditions under which AI might support, rather than hinder, ethical formation.5 This includes a commitment to methodological pluralism, combining normative theory with empirical attentiveness to the sociotechnical realities of AI-mediated education. The move from diagnostic critique toward evaluative orientation requires naming the ethical conditions that these tensions place into view. What emerges here is not merely a dispute over personalization but the opening of a situated ethics attentive to recognition, contestability, reflective autonomy and institutional responsiveness as distinct dimensions of justice in AI-mediated education.

3.2. The Ethics of Fairness in Educational AI: Normative Conflicts and Epistemic Injustice

Where personalization raises questions of autonomy, fairness foregrounds distributive and recognitional justice. This subsection examines how different ethical paradigms frame the problem of bias, legitimacy, and epistemic exclusion in AI-mediated assessment and selection processes. AI in education raises deep concerns about fairness. As documented in multiple studies (Baker & Hawn, 2021; Barocas et al., 2023), machine learning systems often reproduce and amplify existing social biases. In contexts like admissions, grading, or recommendations, such biases may have long-term effects on learners’ opportunities and self-perceptions (Egger, 2006). Ethical AI frameworks respond with fairness metrics such as demographic parity or equalized odds, aiming for statistical equity across groups. Yet as Binns (2018) argues, these metrics are not neutral; they reflect normative assumptions about merit, need, and equality.6 Fairness metrics are thus value-laden and contested. These metrics become operational through algorithmic systems such as classifiers or language models that optimize for accuracy under fairness constraints. For instance, automated essay scoring might rely on embeddings (e.g., word2vec, BERT), which encode cultural and stylistic biases present in the training data. What seems computationally neutral may perpetuate epistemic asymmetries.
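To make the normative load of such metrics concrete, the following minimal sketch computes two common criteria, demographic parity (equal selection rates) and equalized odds (equal true- and false-positive rates), for a small set of hypothetical predictions. The records and group labels are invented for illustration and are not drawn from any cited study; the point is only that the two criteria can disagree about the same system, so choosing a metric already presupposes a conception of fairness.

```python
# Hypothetical admissions-style predictions for two groups; illustrative only.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label)
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),
]

stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "tp": 0, "neg": 0, "fp": 0})
for group, y, y_hat in records:
    s = stats[group]
    s["n"] += 1
    s["selected"] += y_hat          # for demographic parity (selection rate)
    if y == 1:
        s["pos"] += 1
        s["tp"] += y_hat            # for equalized odds (true-positive rate)
    else:
        s["neg"] += 1
        s["fp"] += y_hat            # for equalized odds (false-positive rate)

for g, s in sorted(stats.items()):
    rate = s["selected"] / s["n"]
    tpr = s["tp"] / s["pos"] if s["pos"] else float("nan")
    fpr = s["fp"] / s["neg"] if s["neg"] else float("nan")
    print(f"group {g}: selection rate = {rate:.2f}, TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```

On these invented records the system satisfies demographic parity (both groups have a selection rate of 0.40) yet violates equalized odds (true-positive rates of 0.50 versus 1.00, false-positive rates of 0.00 versus 0.25): whether it counts as fair depends on which assumption about merit and need the chosen metric encodes.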
Responsible AI reframes the issue. It emphasizes who defines fairness, who participates in system design, and how power operates. This aligns with Rawlsian justice (Rawls, 1971) and democratic education, but is vulnerable to institutional inertia. Fairness thus becomes a process of recognition as well as distribution. Following Fricker (2007), epistemic injustice arises when individuals are denied credibility or interpretive resources. In education, such injustice becomes encoded when certain linguistic styles, cultural references, or learning strategies are misrecognized by AI. Algorithmic systems can be understood not merely as neutral tools, but as sociotechnical assemblages (Latour, 1993) or technical individuations (Simondon, 2020), insofar as they incorporate implicit models of learners, knowledge, and success. As Floridi and Cowls (2019) note, such systems co-produce decisions through distributed agency. They are epistemic artifacts with ontological force. Critical views from postcolonial theory, Indigenous data sovereignty (Kukutai & Taylor, 2016), and feminist epistemology (Harding, 1991; Haraway, 1988) question assumptions of objectivity and universalism in algorithmic design. Fairness must thus be situated within broader struggles over epistemic legitimacy, cultural pluralism, and data ownership. Dominant norms may marginalize Indigenous, plurilingual or hybrid literacies. Questions arise: Who controls the data? Whose interests shape optimization? Learners often lack influence over these systems, raising issues of epistemic extractivism. Participatory governance and institutional accountability become essential. Debates on the EU AI Act and Open Data policies highlight the political economy of fairness (European Parliament & Council, 2024).7 Moreover, fairness metrics embed normative frameworks rarely made explicit. They risk technocratic reductionism and ethical displacement. In education, this may suppress diverse learning practices or reinforce standardized pedagogies. From a Foucauldian perspective, fairness metrics may act as technologies of governance, converting ethical-political debates into technical administration. This narrows deliberation, echoing Arendt’s (1958) concern with the erosion of public judgment and plurality. An expanded concept of fairness can be informed by educational theory and critical pedagogy. From this perspective, education is not reduced to behavioral adaptation but is approached as a process involving self-formation, solidarity, and judgment (Biesta, 2006; Jörissen & Marotzki, 2009; Klafki, 1996). Fairness may encompass dialogical learning, hermeneutic openness, and mutual recognition. AI can be examined not only for equality of outcome, but also for how it supports or limits epistemic agency and the development of subjectivity. This position involves methodological reflexivity, recognizing the situatedness of analytical perspectives. It draws on political philosophy, ethics of technology, and educational theory. Scholars like Haraway (1988), Harding (1991), and Habermas (2022) demonstrate that knowledge production is partial and mediated. Within this frame, fairness appears not as a static metric but as a context-bound epistemic horizon for ongoing negotiation and reinterpretation.

3.3. Epistemic Agency and Democratic Subject Formation

Building on autonomy and fairness, this subsection deepens the analysis by focusing on epistemic agency—asking not only how learners are evaluated, but how their interpretive authority and voice are configured within AI-mediated learning environments. One central issue concerns the nature of knowledge and the role of learners as epistemic agents. Educational AI systems not only support learning but influence what is recognized as valid knowledge, which reasoning styles are preferred, and whose voices are amplified. These systems embody epistemological assumptions and power dynamics, structuring what Eynon and Young (2020) and Williamson and Eynon (2020) describe as epistemic governance. Learners may be treated less as participants and more as data sources, raising questions about education as a space for dialog and shared inquiry. These dynamics raise ontological as well as political questions. AI systems may be understood as sociotechnical assemblages or individuating processes, enacting particular epistemologies. Rather than functioning as neutral intermediaries, they co-constitute epistemic realities within institutional contexts. In light of Schlicht’s (2017) reflections on Kantian philosophy, it is relevant to ask whether AI systems can participate in epistemic intentionality. This includes examining the kind of learner subjectivity they enable: whether critical judgment or compliance, and whether learners are treated as ends or as means to predefined outputs. The analysis draws on pluralist, non-foundationalist epistemology, including critical pragmatism and feminist standpoint theory. It highlights positionality, reflexivity, and epistemic pluralism. Generative systems like ChatGPT or GitHub Copilot often produce fluent output without disclosing sources or normative assumptions. Their operation relies on stochastic pattern completion, which may conflate credibility with fluency and marginalize situated knowledge. While explainable AI seeks to address opacity (Samek et al., 2019), such approaches often leave interpretive concerns unresolved.8 Five epistemic risks emerge: (1) opacity from non-transparent models, (2) epistemic narrowing through goal optimization, (3) misrecognition from cultural mismatches, (4) de-responsibilization by outsourcing evaluation, and (5) conformity via feedback loops that reward alignment. Systems like Gradescope, which use rubrics and pattern recognition to grade essays, may reinforce homogeneity by penalizing divergence from trained templates.
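As a deliberately simplified sketch of the fifth risk, the following hypothetical scorer rewards similarity to a template derived from previously rewarded essays; it is not a description of how Gradescope or any other product actually works. Its only purpose is to show the mechanism by which divergence from a trained pattern is penalized regardless of the quality of the reasoning.

```python
# Hypothetical template-similarity scorer; illustrative only, not any real product.
import math
from collections import Counter

def vectorize(text):
    """Crude bag-of-words vector, standing in for learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Template built from essays the system was trained to reward.
template = vectorize("the industrial revolution increased factory production and drove urban growth in europe")

essays = {
    "conforming": "the industrial revolution increased factory production and caused rapid urban growth in europe",
    "divergent": "when the looms arrived the village changed: land, labour and family life were rewoven around the machine",
}

for label, text in essays.items():
    print(f"{label}: score = {cosine(template, vectorize(text)):.2f}")
```

Nothing in such a score distinguishes weak argumentation from an unfamiliar but coherent framing; divergence from the template is penalized as such, which is the mechanism of epistemic narrowing and conformity described above.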
A broader view of educational AI considers it as a co-actor in pedagogical contexts. This includes philosophical reflection, participatory design, and pedagogical theory to ensure that AI serves not only efficiency but also epistemic development. Fairness intersects with epistemic justice by focusing on the distribution of conditions for knowing. Klafki (1996) emphasizes judgment, solidarity, and epistemic agency as educational goals. These perspectives imply methodological consequences. The evaluation of AI systems involves an epistemology of critique that combines empirical analysis with philosophical inquiry. As Haraway (1988), Harding (1991), and Feyerabend (1975) argue, critique is always situated and partial. Philosophical work on educational AI attends to the material-discursive settings in which these technologies are deployed. In this sense, epistemic agency and democratic participation are not secondary to technical performance but are central to understanding education under digital conditions. Questions arise: What knowledge is validated? What agency is made possible? What subjectivities are shaped through AI systems? These reflections on fairness, epistemic injustice and agency indicate that what is required is not only critique but a situated ethical vocabulary capable of evaluating AI systems along distinct dimensions such as recognition, contestability, reflective autonomy and institutional responsiveness. The next section therefore establishes the conceptual and pedagogical foundations required to situate these criteria, before turning to their systematic articulation.

4. Reframing AI as a Constitutive Pedagogical Actor in Democratic Education

This section synthesizes the previous ethical tensions and shifts the focus from evaluation to design orientation, asking how AI systems actively participate in shaping pedagogical relations rather than merely supporting them. The aim is to reinterpret AI not as an external tool but as a co-constitutive actor within educational infrastructures. The role of AI in education cannot be reduced to questions of technical accuracy or functional support. Algorithmic systems intervene in how knowledge is structured, how interaction unfolds, and how subjectivities take shape. These systems are not merely background tools; they are embedded in institutional practices and epistemic frameworks that affect what can be learned, by whom, and under which conditions. This section draws together the preceding analyses to explore how AI might be understood, designed, and governed as a pedagogically consequential actor within democratic educational settings.

4.1. From Procedural Ethics to Formative Intentionality: Reframing AI’s Role in Educational Subjectivation

To move beyond risk mitigation and procedural oversight, this subsection introduces the concept of formative intentionality, arguing that AI systems should be evaluated by their contribution to subject formation rather than mere efficiency or compliance. Contemporary AI ethics has focused predominantly on harm reduction: minimizing bias, enhancing transparency, securing privacy. These principles form a necessary baseline for assessing system impact. In educational contexts, however, ethical reasoning extends beyond risk mitigation. Educational institutions are spaces of formation, where learners engage with judgment, autonomy, and reflexivity.9 Ethical AI in education is therefore approached here in relation to its potential to contribute to the formation of learners as epistemic and moral subjects. This implies a shift from instrumental rationality to what is described as formative intentionality. AI systems are not only evaluated by adherence to procedural norms, but also by the extent to which they support processes involving self-reflection, epistemic difference, and the recognition of learners as agents. A crucial distinction is necessary at this point. Whereas “learning” in AI discourse is often framed in terms of adaptive output, goal alignment and behavioral optimization, a logic closely aligned with what Foucault (1979) describes as governmental rationality, education in the sense of Bildung refers, following Klafki (1996) and Biesta (2006), to a formative process in which subjects come to relate reflectively to themselves, to others and to the world. This difference is not merely semantic but directly affects how teaching objectives are articulated under AI conditions, determining whether they are reduced to performance metrics or oriented toward the cultivation of judgment, autonomy and democratic agency. Bildung is here understood in Klafki’s (1996) sense as the unfolding of self and world, shaped by interpretation, openness, and relational understanding (Köhler, 2003). This aligns with Fricker’s (2007) notion of epistemic virtue and Nussbaum’s (2000) view of education as moral development. It resonates with the capability approach, which defines freedom not only as absence of constraint but as real opportunity for thought, action, and self-determination (Nussbaum, 2000; Sen, 1999). AI-mediated learning is thus framed as a situated practice where learners develop interpretive agency and articulate evaluative positions.
A more precise formulation is needed to articulate the conditions under which an AI system can genuinely support such processes of formation. Dialog-based AI drawing on large language models may pose open-ended prompts or moral dilemmas. These systems can offer feedback that engages learners in interpretive deliberation. Adaptive environments may introduce epistemically plural resources to expand reasoning styles. These are not prescriptions but indications of how design intersects with pedagogical purpose. The idea of ethical formation rests on epistemological assumptions that envision learners as moral subjects, shaped by traditions such as Kantian autonomy (Kant, 1785/1998), virtue epistemology, and the capability approach. These frameworks are not uncontested. Poststructuralist, decolonial (e.g., Spivak, 1988), and Foucauldian perspectives (Foucault, 1979) redirect focus to processes of subjectivation and positioning. Ethical formation thus acknowledges a plurality of educational philosophies without reducing them to a single horizon.
Formative intentionality differs from behaviorist shaping or nudging (Thaler & Sunstein, 2008). It is a design orientation that sees technologies as participants in moral and epistemic development. Such a stance assumes dialogical conditions and the capacity for learners to position themselves critically. It implies practices that enable negotiation, questioning, and perspective-taking, such as deliberative interfaces, narrative feedback, or participatory scaffolding. This inquiry operates at the intersection of ethics, educational theory, and design research. Its methodology draws on reflective equilibrium, integrating moral reasoning, empirical attentiveness, and pedagogical insight. It builds on critical theory and feminist epistemology, including Habermas (2022), Haraway (1988), and Harding (1991), who emphasize situated, dialogically constituted, and socially embedded justification. Technologies are embedded in infrastructures that shape possible actions and meanings. Ethical formation requires attention to the sociotechnical conditions in which educational AI is developed and used. Curricular norms, data governance, and access structures influence what kinds of engagement emerge.
Within the broader arc of this paper, the shift from compliance to ethical formation reflects a conceptual movement: from Responsible AI as procedural accountability to Ethical AI as attentiveness to the conditions under which subjectivity, reflexivity, and participation can unfold. This dual perspective invites reflection on systems that are both governable and generative, shaped through oversight and responsive to diverse educational subjectivities, not as assumptions, but as outcomes of ongoing dialog among theory, practice, and situated use.

4.2. AI and the Architecture of Epistemic Access

Extending the focus on formative intentionality, this subsection examines how AI systems structure access to knowledge and thereby shape the epistemic conditions under which such formation becomes possible. As AI systems mediate access to knowledge, regulate interactions, and participate in assessments, they shape epistemic hierarchies: determining what counts as legitimate knowledge, which discursive styles are valorized, and whose perspectives are foregrounded or silenced. In this sense, AI does not simply distribute information but co-constructs the conditions of knowing. Recent reviews show that AI-mediated systems for competency assessment tend to operationalize narrow models of learning, potentially overlooking pluralistic conceptions of knowledge (Hummel et al., in press). To be ethically meaningful, educational AI requires a politics of recognition. Recognition here is understood not as polite acknowledgment, but as a constitutive precondition of epistemic agency (Egger, 2008). Drawing on Honneth’s (1992/1994) theory of recognition and Fricker’s (2007) notion of epistemic injustice, such systems would affirm learners not only as recipients of knowledge but as credible knowers, whose interpretations and reasoning styles are valid contributions to the learning process. This entails more than fair outcomes or inclusive datasets; it involves interaction architectures that accommodate epistemic pluralism and allow learners to see themselves and be seen as authoritative participants. From a design-theoretical standpoint, this might include interface elements that enable multiple modalities of expression, dialogic scaffolds, or algorithmic structures that refrain from normalizing dominant discourses.
Empirical research on adaptive learning and AI-mediated feedback shows that subtle shifts in framing can affect how learners position themselves epistemically. Learners may experience partial recognition, friction, or ambivalence that influence their engagement. Such variability resists reduction to dichotomies and calls for a more granular vocabulary of epistemic experience. Yet the idea of recognition invites critical examination. What epistemic subject is being presupposed? The framing draws on ideals of autonomy and rationality (Heidegger, 1946/1976), but decolonial and poststructuralist critiques remind us that not all learners are granted equal access to these subject positions. As Spivak (1988) argues, recognition is situated within global structures of voice and silencing. Designing for recognition thus also means designing against epistemic erasure. It becomes necessary to interrogate how epistemic authority is established and sustained. Rather than assuming recognition as a stable response, it may be more productive to ask how algorithmic infrastructures configure legitimacy and credibility. The question is not only who is recognized, but by whom, under which conditions, and through what mediations. This shift opens space for examining how subject positions are algorithmically produced, how hermeneutical gaps are widened or narrowed, and how testimonial credibility is scored or denied through backend parameters.
Recognition-based approaches also prompt a rethinking of fairness beyond distributive models. Whereas conventional fairness metrics aim for statistical parity, a recognition lens emphasizes the conditions under which epistemic differences become legible. Justice here involves not only parity of outcome but inclusivity of interpretation. A system can accommodate difference only if its structures allow divergent worldviews, languages, and logics to emerge without epistemic penalty. This orientation aligns with the capability approach’s concern for real freedoms and with Habermasian ideals of dialogic inclusion. At the same time, the methodological stance of this argument must be made explicit. This section operates at the intersection of philosophy, design critique, and educational theory. Its orientation is not purely analytical or empirical, but situated within a reflexive ethics of practice, akin to Haraway’s situated knowledges and Harding’s standpoint epistemology. This positioning foregrounds not abstract generality but partial perspective, not neutrality but epistemic accountability. The argument does not presume an external vantage point but draws legitimacy from the triangulation of theoretical insight, educational relevance, and critical scrutiny.
Designing for epistemic justice is inseparable from institutional and infrastructural conditions. AI systems are embedded in curricular frameworks, platform economies, and policy regimes that delimit what forms of recognition are thinkable. Efforts toward epistemic justice must therefore address not only system behavior, but also the environments in which those systems are embedded. Realigning AI with educational inclusion entails acknowledging these structural entanglements rather than bypassing them through isolated interventions. In this sense, the question of epistemic access anticipates the first criterion of a situated ethics of AI in education, namely recognition as a condition for epistemic agency and democratic participation.

4.3. Democratic Participation and the Ethics of Co-Agency

After addressing access and recognition, this subsection focuses on co-agency as a criterion for democratic participation, analyzing how learners might meaningfully intervene in or reshape AI-driven decision structures rather than remaining passive recipients of algorithmic guidance. Democratic education aims not only to impart knowledge, but to cultivate civic agency. If AI is to support this aim, it functions not as a directive apparatus but as a facilitator of co-agency. This implies a reconceptualization of learners not as passive recipients of instruction, but as epistemic and political agents capable of co-determining the terms of their education (Bartok et al., 2023; Donner & Hummel, in press; Hummel et al., 2025b, 2025c; Köhler, 2003). This draws on Rawlsian notions of justice as fairness (Rawls, 1971) and Sen’s (1999) idea of development as freedom. It involves designing AI systems that remain pedagogically open and contestable. Learners, educators, and developers can jointly shape the values, objectives, and evaluative logics that govern algorithmic systems in education. This may include algorithmic literacy, participatory design, and institutional mechanisms for feedback and revision. Philosophically, this involves viewing AI not as a fixed technological object, but as a relational artifact whose ethical significance emerges through situated use and interpretive appropriation (Dignum, 2019; Jonas, 1984). The notion of co-agency developed here differs from concepts such as distributed agency (Hutchins, 1995), relational autonomy (Mackenzie & Stoljar, 2000), or agential realism (Barad, 2007). Co-agency refers to the contingent, asymmetrical interplay between human and machine actors within institutional frames. It points to structured possibilities for initiating, contesting, or resignifying action in AI-mediated environments. Experiences of agency in such contexts are rarely stable or unambiguous. Hummel and Donner (2023) observe that students often experience AI systems as opaque, affecting their interpretive freedom. Agency may manifest as compliance, interface appropriation, quiet resistance, or affective withdrawal, forms often overlooked in functionalist accounts. These dynamics are documented in educational research (Bulfin et al., 2015) and critical data studies (Amoore, 2020), where learner responses oscillate between adaptation and friction.
The idea of democratic participation intersects uneasily with AI’s epistemic and operational logics. Automation and optimization frequently stand in tension with the open-ended, deliberative character of education. As Eubanks (2018) notes, systems designed for procedural efficiency may suppress contestation and marginalize plural voices, especially when default values remain implicit. Designing for co-agency may involve friction: mechanisms that interrupt automation, foreground contingency, and sustain epistemic multiplicity. These include refusal interfaces (Akama et al., 2020), recursive feedback loops, or scaffolds that create interpretive pause. The objective lies less in transparency for its own sake than in fostering reflexive awareness of contestability. Participation is neither evenly distributed nor always accessible. Spivak (1988) emphasizes that recognition and voice are conditioned by histories, infrastructures, and normative codes. Participation may be formally enabled yet structurally foreclosed. Designing for co-agency must therefore take into account exclusions generated by linguistic coding, infrastructural constraints, or bureaucratic opacity. This tension cannot be resolved by interface design alone; it also requires attention to the political economy of educational platforms: how governance decisions, procurement logics, and algorithmic parameters shape pedagogical possibility and delimit epistemic space.
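To indicate how such friction might be rendered in software, the following minimal sketch is offered as a hypothetical illustration; all class and field names are assumptions introduced for the example. It pauses an adaptive recommendation, states the system's grounds in plain language, and records a learner's contestation as data for later human and institutional review rather than discarding it.

```python
# Minimal, hypothetical sketch of a "friction" mechanism: an adaptive recommendation
# is paused, its grounds are stated in plain language, and a learner's contestation is
# recorded as data for later human and institutional review rather than discarded.
# The class and field names are assumptions introduced for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PausedRecommendation:
    proposed_action: str            # what the system intends to do next
    rationale: str                  # the system's stated grounds, in plain language
    status: str = "paused"          # paused -> accepted | contested
    learner_response: Optional[str] = None

    def accept(self) -> None:
        self.status = "accepted"

    def contest(self, reason: str) -> None:
        # The learner's reason is preserved as an epistemic contribution, not noise.
        self.status = "contested"
        self.learner_response = reason

rec = PausedRecommendation(
    proposed_action="switch to remedial module 3B",
    rationale="predicted low mastery of the topic, based on the last two quiz scores",
)
rec.contest("the quizzes did not cover the approach I used; I want to stay on my current path")
print(rec.status, "->", rec.learner_response)
```

The design choice at stake is modest but telling: the contestation is stored alongside the recommendation rather than absorbed into a behavioral log, keeping the interpretive pause visible to those who govern the system.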
Methodologically, this section moves between conceptual analysis and design critique. It draws on theories of justice and capability, as well as on science and technology studies, feminist epistemology, and postcolonial critique. These positions do not converge on a unified account of participation but offer complementary lenses on the contested dynamics of agency in AI-mediated learning. AI systems in democratic education do not operate as neutral facilitators. They are already implicated in political and epistemic architectures. The question is not whether learners possess agency, but how that agency is constructed, mediated, and rendered actionable. Designing for co-agency becomes a matter of conceptual discernment and contextual sensitivity, attuned to contradiction, ambiguity, and the lived texture of educational experience. In this perspective, the question of democratic participation anticipates the second criterion of a situated ethics of AI in education: contestability affordances, understood as the capacity of learners and educators to challenge, redirect or meaningfully co-shape algorithmic decisions within institutional boundaries.

4.4. Generative Ambiguity: Toward a Situated Ethics of Educational AI Systems

Rather than resolving the tension between Ethical AI and Responsible AI, this subsection proposes generative ambiguity as a productive ethical stance that keeps pedagogical spaces open for negotiation, contestation and the articulation of plural value orientations. Ethical AI provides normative precision, conceptual structure, and philosophical depth. Responsible AI brings attention to contextual variability, stakeholder participation, and procedural flexibility. Both logics remain essential in educational contexts, where normative assumptions are integral to how learning is organized and interpreted. The relationship between these two frameworks cannot be resolved through a simple merger. Their coexistence invites an ethics that is dialogical in form and situated in orientation. Such an ethics resists convergence on universal principles and equally resists surrendering coherence to pure relativism. It operates in the space between the prescriptive and the participatory, algorithmic consistency and pedagogical disruption, epistemic standards and experiential plurality. In educational settings, where knowledge is interpreted, contested, and remade, these tensions are constitutive. The point is not to eliminate friction between Ethical and Responsible AI, but to work within it, drawing from traditions of reflective equilibrium (Rawls, 1971), dialogic rationality (Habermas, 2022), and situated knowledge practices (Haraway, 1988; Harding, 1991). These traditions suggest that ethical clarity arises from engaging ambiguity with intellectual care and institutional responsiveness. Rather than treating this ambiguity as a design flaw, one might view it as an epistemic affordance. Friction between competing normative expectations can sharpen ethical perception. It can prompt reconsideration of who is implicated in AI systems, what assumptions guide their implementation, and how educational values are encoded in technical artifacts. Understood this way, generative ambiguity becomes a resource for pedagogical imagination. Ambiguity is not limited to semantic vagueness or conceptual looseness. It can be epistemic, as in the plurality of truth claims; normative, as in conflicting value frameworks; or performative, as in the enactment of agency across human and non-human actors. Its generativity lies in its capacity to defer closure, to stimulate ethical reflection, and to keep participation open. Rather than claiming universality, this orientation draws on traditions such as care ethics (Held, 2006), pragmatist ethics (Dewey, 1916), and virtue ethics (MacIntyre, 2007), while emphasizing dialogical openness and situated articulation.
Such a stance has methodological consequences. It demands an approach to ethics that remains responsive to contested interpretations, hybrid use cases, and evolving practices. Instead of aiming for closure, it encourages iterative articulation, layered accountability, and cross-disciplinary dialog. This does not imply a loss of philosophical rigor. It shifts the emphasis from foundationalism to orientation, from finality to responsiveness, from universality to situatedness. This orientation can be seen in AI-driven learning environments that integrate dialogic feedback across multiple epistemic registers, allowing learners to navigate not only different contents, but different norms of reasoning. Projects such as “Education Reimagined” or ethical AI prototyping initiatives at the Connected Intelligence Centre (CIC) offer examples where divergent worldviews are made co-present, surfacing friction as a mode of reflection. Designing AI systems for education under these conditions becomes a matter of positioning rather than prescription. Ethical meaning is not imposed from outside, nor fully specified in advance. It emerges through engagements that are pedagogical, institutional, and political.
For designers and engineers, this raises questions of translation: how can the ethos of generative ambiguity inform technical and institutional practice? One direction lies in participatory AI design, where normative uncertainty becomes an invitation to collaborative exploration. Practices of slow prototyping, iterative reflection, and feedback-rich governance may serve as viable modes through which ambiguity is rendered productive. Such approaches have been explored in industry contexts, where Responsible AI is treated as an iterative design challenge embedded in organizational routines (Benjamins et al., 2019). These approaches resonate with Biesta’s (2006) view of education as a domain of interruption rather than instruction, and with Klafki’s (1996) concept of Bildung as an open process of world- and self-relation. In this light, the future of educational AI is not reducible to technical evolution. It is equally a philosophical and civic undertaking, dependent on the capacity to inhabit ethical complexity without simplifying it.10 The ambiguity between Ethical and Responsible AI is not a temporary inconvenience. It is a constitutive feature of any system that supports learning while remaining attuned to the plurality of human values and educational aspirations. Rather than signaling ethical indecision, such ambiguity can be read as a productive interval in which ethical meaning has not yet been settled but remains available for situated articulation. It is precisely in this interval that a situated ethics finds its locus of operation, not as a final resolution but as an ongoing practice of orientation within contingent educational futures.

5. Implications for Practice

This section translates the conceptual distinctions and ethical criteria developed above into concrete pedagogical and institutional orientations. Rather than remaining at the level of abstract critique, the aim here is to demonstrate how a situated ethics approach can shape decision-making in educational AI under real institutional conditions. The analysis of Ethical and Responsible AI has demonstrated that philosophical distinctions are not abstract exercises but carry direct consequences for educational practice. If AI systems are to support democratic education, they must be designed, implemented and governed in ways that reflect normative commitments to autonomy, fairness and epistemic justice. This involves translating conceptual insights into pedagogical orientations, institutional routines and governance mechanisms that shape everyday educational decision-making.
First, the challenge of personalization underscores that autonomy must be conceived as reflective self-determination rather than algorithmic optimization. In practice, this entails designing AI systems that open spaces for interpretation, choice and contestation rather than simply steering learners toward predefined goals. Educators can amplify this orientation by making adaptive feedback pedagogically transparent and by inviting critical engagement with its underlying metrics. Second, fairness in educational AI cannot be reduced to statistical parity or performance-based metrics. Institutions require evaluative procedures capable of recognizing epistemic plurality and preventing the marginalization of non-dominant learning styles or cultural registers. Embedding algorithmic literacy within curricula enables learners to interrogate fairness not as a fixed technical parameter but as a contested normative horizon. Third, democratic participation requires a shift from top-down governance to relational co-agency. This can take the form of participatory design workshops, iterative feedback mechanisms for adjusting system parameters and institutional structures that remain responsive to learner input. Such practices move beyond procedural compliance and cultivate deliberative engagement with AI systems.
Fourth, a situated ethics of educational AI implies that evaluation must remain pluralist, layered and ongoing. Hybrid approaches that combine technical audits, ethical reflection and situated inquiry into learners’ lived experiences offer more than compliance; they enable reflective institutionality. Ethics review boards, pedagogical evaluation frameworks and dialogic feedback infrastructures can extend accountability towards epistemic justice, reflective autonomy and democratic co-agency.
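One way to picture such layered accountability is a record that keeps a technical audit, a pedagogical review and learners' situated accounts side by side instead of collapsing them into a single score. The sketch below is hypothetical; the system name, findings and values are illustrative assumptions, not results from any actual evaluation.

```python
# Minimal, hypothetical sketch of a layered evaluation record that keeps a technical
# audit, a pedagogical review and learners' situated accounts side by side instead of
# collapsing them into a single score. All names, findings and values are illustrative
# assumptions, not results from any actual evaluation.

evaluation_record = {
    "system": "adaptive feedback tool (pilot)",
    "technical_audit": {
        "bias_check": "no disparity found on available demographic splits",
        "caveat": "splits do not capture linguistic or epistemic minorities",
    },
    "pedagogical_review": {
        "reviewer": "course team",
        "finding": "feedback phrasing privileges one style of argumentation",
    },
    "learner_inquiry": [
        "feedback felt generic when I drew on examples from my own context",
        "I could not tell why my answer was marked as off-topic",
    ],
    "follow_up": "revise feedback templates, re-run the audit, schedule a dialogic review next term",
}

# Each layer remains visible on its own terms; accountability is layered, not averaged.
for layer, content in evaluation_record.items():
    print(f"{layer}: {content}")
```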
These implications converge in the recognition that AI systems are not external instruments but constitutive actors within educational practice. Their ethical significance emerges not only from technical design but from how they are embedded in institutional and pedagogical structures. In this sense, the following microcases do not function merely as illustrations but as situated operationalizations of the paradigm tensions articulated in Table 1 and the partial normative optics outlined in Table 2. Each case brings into focus where these ethical lenses begin to strain when translated into concrete educational configurations, marking the point at which a situated evaluative orientation becomes necessary. A practice-oriented consolidation of these diagnostics follows later.

5.1. Microcase 1—Algorithmic Grading

The first microcase explores algorithmic grading as a site where epistemic authority, fairness and learner agency intersect. It illustrates how Ethical AI, Responsible AI and situated ethics each generate distinct evaluative orientations when predictive scoring begins to displace human judgment.
Ethical AI: Under an Ethical AI lens, algorithmic grading is evaluated according to principles of fairness, autonomy and non-manipulation. The main concern is whether such systems treat learners as rational agents whose academic performance should be judged transparently and without bias. Ethical AI would require that evaluative criteria are explicit, that no group is systematically disadvantaged and that learners’ dignity as autonomous subjects is preserved when predictive scoring replaces human judgment.
Responsible AI: A Responsible AI perspective reframes grading as a governance issue rather than only a question of moral correctness. It asks who defines grading parameters, who can access or challenge evaluative criteria and how procedural oversight integrates diverse stakeholder voices. Here, legitimacy stems not only from just outcomes but from inclusive and accountable decision-making structures embedded within institutional frameworks.
Situated ethics: Instead of treating grading as a technical fairness problem, situated ethics recasts it as a site where epistemic authority itself becomes negotiated rather than merely administered. It does not simply call for inclusive procedures but insists that evaluation be recognized as a contested epistemic space rather than a finalized assessment routine. A situated ethics approach demands that learners are not only protected from bias but can actively articulate and contest the interpretive assumptions embedded in grading logic. Here, recognition becomes a condition for being seen as a credible knower, while contestability affordances enable learners to re-enter the evaluative loop not as subjects of assessment but as co-authors of what counts as academic merit (see Table 3).
Under situated ethics, this case foregrounds (a) Recognition and (b) Contestability, revealing precisely where the paradigm tension in Table 1 intersects with the normative limits of the ethical optics outlined in Table 2.
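Read through this lens, a grade would travel with its interpretive assumptions and remain open to learner annotation. The following sketch is a hypothetical illustration of that idea; the function and field names are invented for the example and do not describe any deployed grading system.

```python
# Minimal, hypothetical sketch of a grade record that travels with its interpretive
# assumptions and stays open to learner annotation. The function and field names are
# invented for the example and do not describe any deployed grading system.

def grade_with_assumptions(score: float, assumptions: list) -> dict:
    """Return a grade record that exposes what the scoring logic took for granted."""
    return {
        "score": score,
        "interpretive_assumptions": assumptions,   # what counted as a 'good' answer
        "learner_annotations": [],                 # space for the learner to respond
        "status": "open_to_contestation",
    }

record = grade_with_assumptions(
    score=0.62,
    assumptions=[
        "argument structure follows thesis-evidence-conclusion",
        "sources from the course reading list are weighted more highly",
    ],
)
record["learner_annotations"].append(
    "my essay uses a narrative structure from my prior academic training; please review"
)
print(record["status"], "|", len(record["learner_annotations"]), "annotation(s)")
```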

5.2. Microcase 2—AI-Mediated Admissions Systems

While grading foregrounds the internal dynamics of assessment, admissions processes expose the institutional politics of selection and classification. This second microcase shows how evaluative frameworks shift once AI-mediated admissions are approached not merely as technical optimization, but as a negotiation over who is recognized as a potential learner.
Ethical AI: From the perspective of Ethical AI, automated admissions systems are assessed in terms of fairness and non-discrimination. The primary concern is whether algorithmic decision-making optimizes for merit without bias and whether applicants are treated consistently according to transparent and universally applicable criteria.
Responsible AI: A Responsible AI approach reframes admissions as a governance issue rather than only a question of moral correctness. It asks who sets admission thresholds, who can access or challenge the underlying criteria and how procedural oversight integrates diverse stakeholder voices.
Situated ethics: Situated ethics expands the normative frame by shifting admissions from a distributive logic of fairness to a reflexive negotiation over how educational legitimacy is institutionally constructed. Rather than treating selection as an issue of procedural correctness, it foregrounds how admissions systems actively shape what counts as educational potential and who is recognized as a legitimate future learner. Reflective autonomy involves not only being evaluated fairly but understanding and interrogating how one’s profile is constructed through data. Institutional responsiveness becomes relevant where the admissions infrastructure remains open to revision in light of contestation rather than stabilizing predefined metrics (see Table 4).
Here, (c) Reflective Autonomy and (d) Institutional Responsiveness become decisive, exposing how the governance logics mapped in Table 1 strain against the institutional blind spots indicated in Table 2.
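A minimal sketch of what reflective autonomy and institutional responsiveness could look like in an admissions pipeline is given below. It is a hypothetical illustration: the applicant can see which derived features compose their profile and flag those they consider misrepresentative, and flagged items are routed to an institutional revision log rather than disappearing into the pipeline. All feature names and values are assumptions introduced for the example.

```python
# Minimal, hypothetical sketch of reflective autonomy and institutional responsiveness
# in admissions: the applicant can see which derived features compose their profile and
# flag those they consider misrepresentative; flagged items are routed to an institutional
# revision log instead of disappearing into the pipeline. Feature names are illustrative.

applicant_profile = {
    "features": {
        "gpa_percentile": 0.71,
        "extracurricular_score": 0.40,   # derived only from activities the form could encode
        "school_context_index": 0.55,    # proxy built from institutional averages
    },
    "flags": [],
}

def flag_feature(profile: dict, feature: str, concern: str, revision_log: list) -> None:
    """Record an applicant's concern and route it to the institution's revision log."""
    entry = {"feature": feature, "concern": concern}
    profile["flags"].append(entry)
    revision_log.append(entry)   # institutional responsiveness: criteria remain revisable

institution_revision_log = []
flag_feature(
    applicant_profile,
    "extracurricular_score",
    "care work and paid employment could not be represented in the application form",
    institution_revision_log,
)
print(len(institution_revision_log), "item(s) pending institutional review")
```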

5.3. Microcase 3—Adaptive Mentoring Systems

In contrast to grading and admissions, adaptive mentoring operates at the granular level of everyday learning interaction.11 This microcase demonstrates how personalization technologies reconfigure autonomy depending on whether they are framed through Ethical AI, Responsible AI or situated ethics, and how these framings affect the learner’s epistemic position.
Ethical AI: From an Ethical AI perspective, adaptive mentoring systems are evaluated in terms of whether their personalization logic respects learner autonomy and avoids manipulative nudging. The issue is framed as a matter of individual rights.
Responsible AI: A Responsible AI approach shifts the emphasis toward transparency and procedural accountability in personalization. It asks who can interrogate or modify personalization rules and how institutional stakeholders govern the evolution of these systems.
Situated ethics: Under a situated ethics perspective, personalization is no longer understood as adaptive efficiency but as a pedagogical moment in which subjectivity is formed through the negotiation of interpretive agency. Seen this way, adaptive mentoring systems do not simply assist learning but actively configure the epistemic conditions under which learners can appear as agents of their own understanding (Egger, 2008). Reflective autonomy becomes visible where learners can articulate why certain learning directions resonate with their evaluative standpoint. Recognition enters where the system affirms these self-ascribed justifications as epistemically valid contributions rather than reducing them to behavioral signals (see Table 5).
In this case, (a) Recognition and (c) Reflective Autonomy converge, showing how subject formation under algorithmic conditions exceeds both the procedural and principle-based frames of Table 1 and the ethical optics in Table 2, a tension that is taken up in the situated evaluative architecture later developed in Table 6.
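The difference between treating a learner's account as noise and treating it as an epistemic contribution can be indicated with a short hypothetical sketch: where a self-ascribed justification is present, it takes precedence over the behaviorally inferred need. All names and strings are illustrative assumptions.

```python
# Minimal, hypothetical sketch of recognition in adaptive mentoring: a learner's stated
# reason for a choice is treated as an epistemically valid contribution that can redirect
# the proposed path, rather than being reduced to a behavioral signal. All names and
# strings are illustrative assumptions.

def next_mentoring_move(behavioral_signal: str, learner_justification: str = "") -> str:
    """Choose the next move, giving the learner's own account interpretive standing."""
    if learner_justification:
        # Recognition: the self-ascribed justification takes precedence over inference.
        return f"follow the learner-stated direction ({learner_justification})"
    return f"follow the inferred need ({behavioral_signal})"

print(next_mentoring_move(
    behavioral_signal="repeated pauses on proofs, suggesting remediation",
    learner_justification="I am comparing this proof with a method from another course",
))
```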
This synthesis shows that no single paradigm is sufficient for guiding educational AI practice. Ethical AI provides normative clarity, Responsible AI ensures procedural responsiveness and situated ethics integrates both by foregrounding recognition, contestability, reflective autonomy and institutional responsiveness as criteria for democratic and epistemically grounded AI in education. The task for practitioners is not to apply these paradigms sequentially but to sustain their productive tension within the governance of educational AI. In this sense, the situated ethics framework developed here functions not merely as a theoretical synthesis but as an evaluative model through which institutions can align AI systems with educational objectives oriented toward subject formation, epistemic agency and democratic participation.
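In the spirit of the evaluative matrix in Table 6, the four criteria can also be expressed as a simple institutional checklist. The sketch below is a hypothetical illustration, not a validated instrument; the prompts and the met/needs-work logic are assumptions introduced for the example.

```python
# Minimal, hypothetical sketch of the four situated-ethics criteria as an institutional
# checklist, in the spirit of the evaluative matrix in Table 6. The prompts and the
# met/needs-work logic are illustrative assumptions, not a validated instrument.

CRITERIA = {
    "recognition": "Are divergent languages, logics and worldviews legible without penalty?",
    "contestability": "Can learners and educators challenge or redirect algorithmic decisions?",
    "reflective_autonomy": "Can learners see and interrogate how their profiles are constructed?",
    "institutional_responsiveness": "Do contestations feed back into revisable criteria and governance?",
}

def review(system_name: str, answers: dict) -> None:
    """Print which criteria a system currently meets and which still need institutional work."""
    print(f"Review of {system_name}:")
    for criterion, prompt in CRITERIA.items():
        status = "met" if answers.get(criterion, False) else "needs work"
        print(f"  [{status}] {criterion}: {prompt}")

review("adaptive feedback pilot", {
    "recognition": False,
    "contestability": True,
    "reflective_autonomy": False,
    "institutional_responsiveness": True,
})
```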

6. Conclusions: Reframing Ethical AI as Situated Educational Practice

The preceding analysis has traced the conceptual tensions between Ethical AI and Responsible AI not as a problem to be solved, but as a site of productive ambiguity, a generative space where ethical reasoning, design, and educational theory intersect. As outlined in the previous section, these tensions also carry concrete implications for practice, shaping how autonomy, fairness, democratic participation, and epistemic justice can be embedded into institutional and pedagogical routines. Rather than offering a unified framework, the paper has proposed a situated ethics attentive to normative conflict, institutional embeddedness, and pedagogical plurality.
Ethical AI brings philosophical depth, definitional clarity, and evaluative structure. Responsible AI foregrounds governance, participation, and responsiveness. In educational contexts, these orientations are not mutually exclusive but differently positioned responses to the question of how learning environments can be shaped: by whom, under what conditions, and toward which horizons of meaning. Rather than reconciling these paradigms through abstraction, the argument has remained situated in the epistemic, institutional, and civic complexities of educational systems. Concepts such as epistemic justice, co-agency, dialogical design, and generative ambiguity have served to rearticulate the ethical task: not as the formulation of universal standards, but as the cultivation of reflective practices capable of navigating ambiguity without reducing it. From this perspective, ethical AI in education is not a label applied to systems post hoc, but an orientation enacted in design, use, and critique. It takes shape where friction is sustained rather than erased, where learners are treated as epistemic agents rather than data subjects, and where educational technologies are viewed as contested spaces of meaning-making. This orientation does not presuppose consensus. It opens the question of which forms of disagreement can be pedagogically and institutionally sustained.
This account of situated ethics is deliberately distinct from more principle-driven approaches such as virtue ethics or deontological models. While those traditions offer valuable resources, they often rely on assumptions about stable moral agents and clearly delineated duties—assumptions that may not hold in the distributed agency of AI-mediated learning environments. Nor does this approach align neatly with care ethics, although it engages similar concerns around relationality. The orientation articulated here resonates more closely with feminist standpoint theory (Harding, 1991), pragmatist ethics (Dewey, 1916), and critical theories of dialogic rationality (Habermas, 2022), where ethical reflection arises within, not prior to, situated practice. At the same time, the notion of generative ambiguity benefits from conceptual clarification. The ambiguity addressed here is not reducible to vagueness. It is epistemic, as AI systems often bring into contact differing modes of knowing; normative, in that they mobilize divergent notions of justice, agency, and legitimacy; and performative, insofar as these tensions influence how educational processes are structured and interpreted. Generativity, in this context, does not indicate resolution, but the opening of possibilities for reflexive engagement. Ethical articulation proceeds not by dissolving these tensions, but by engaging with them: through design, critique, and participatory dialog.
Future work may further explore how situated ethics can inform the design, governance, and iteration of AI systems in education. This includes empirical studies on system use, conceptual analysis of how infrastructures encode educational priorities, and participatory formats involving learners and educators as co-constructors. AI in education is not solely a technological field but also a philosophical one, concerned with value, agency, and civic formation. Learning platforms that support epistemic plurality—for instance, dialogic tools enabling interpretive multiplicity or feedback mechanisms co-developed with diverse learner groups—illustrate how ethical tensions can be made visible and open to negotiation. A situated ethics of AI remains closely tied to its educational dimension. Education, understood as the formation of subjectivity in relation to others and the world (Biesta, 2006; Jörissen & Marotzki, 2009; Klafki, 1996), cannot be captured by optimization metrics. It unfolds through complexity, interruption, and encounter. Designing with this in view calls for ethical approaches that remain responsive to ambiguity. In this sense, the framework developed here is not proposed as a final solution but as an ongoing orientation that keeps recognition, contestability, reflective autonomy and institutional responsiveness in view as ethical coordinates for navigating AI in HE. This alignment is made explicit in Table 6, which operationalizes these four criteria into an evaluative matrix that can guide institutional and pedagogical decision-making under AI conditions. The tension between Ethical and Responsible AI is not an analytical flaw but a precondition for ethically meaningful design.
Ethical reflection in educational AI does not end with the distinction between Ethical AI and Responsible AI. What emerges instead is a space of ongoing negotiation in which both paradigms continue to unsettle and refine each other. The situated ethics proposed in this paper is not intended as a final model but as a practical orientation for educators, developers and institutions who work with AI under real conditions of tension, uncertainty and plural expectations. Its value lies in keeping open the question of what AI should do in education and for whom it should matter. By grounding evaluation in the four criteria of recognition, contestability, reflective autonomy and institutional responsiveness, the framework offers a concrete vocabulary for assessing AI systems not only in terms of technical performance or procedural fairness, but in relation to their role in shaping epistemic agency and democratic participation. These criteria do not prescribe uniform answers; they act as lenses through which AI systems can be interrogated, questioned and realigned with educational purposes. If taken seriously, they invite a shift from viewing AI as a finished product to treating it as a pedagogical and institutional relationship that must remain open to revision. In this sense, the ethical task is not to stabilize a universal standard, but to cultivate practices that can hold ambiguity without collapsing it, that can treat disagreement as a resource rather than a failure, and that can acknowledge learners and educators as co-authors of AI’s role in education. Framed this way, ethical AI becomes less a compliance category and more a form of ongoing educational responsibility: a commitment to designing and governing AI in ways that sustain agency, invite interpretation and keep alive the possibility of shared judgment. If the argument leads to one concluding stance, it is this: ethics in educational AI does not begin in certainty, but in a willingness to remain accountable to ambiguity. The challenge ahead is thus not merely technical or regulatory, but educational: to cultivate institutional cultures capable of holding ethical ambiguity as a space of shared reflection.

Funding

The APC was funded by the Open Access Publication Fund of the SLUB/TU Dresden.

Institutional Review Board Statement

Ethical review and approval were waived because the submitted manuscript, “Ethical and Responsible AI in Education: Situated Ethics for Democratic Learning”, is a conceptual and theoretical study. It does not involve human subjects, human participants, personal data, animals, or biological material, and therefore no approval from an ethics committee was required.

Informed Consent Statement

Informed consent was not required because the submitted manuscript, “Ethical and Responsible AI in Education: Situated Ethics for Democratic Learning”, is a conceptual and theoretical study. It does not involve human subjects, human participants, personal data, animals, or biological material, and therefore no informed consent was obtained or needed.

Data Availability Statement

The article is based solely on conceptual, theoretical, and normative analysis. Therefore, no datasets are associated with this research.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
HE: Higher Education

Notes

1. While recent analyses have focused on the normative foundations of explainability and justification in machine learning systems (Jongepier & Keymolen, 2022), this article extends the discussion to the educational domain, where AI functions not merely as a technical artefact but as a co-constitutive epistemic agent in shaping learners’ access to recognition and knowledge.
2. Roll and Wylie (2016) trace the historical evolution of AI in education, noting a growing tension between innovation-driven development and pedagogical intentionality.
3. Scanlon (1998) emphasizes that moral justification to others lies at the heart of ethical reasoning—a view that aligns closely with participatory approaches to AI design. Building on this, Pauer-Studer (2023) stresses the importance of procedural legitimacy and justificatory reciprocity as normative foundations for decision architectures. In the context of educational AI, this resonates with Dignum’s account of responsible autonomy, which highlights the ethical complexity that arises when algorithmic decisions intersect with human accountability and frames AI ethics as a plural field in which competing traditions of normative reasoning co-exist, often with implicit tensions (Dignum, 2018).
4. A meta-inventory of human values, such as that proposed by Cheng and Fleischmann (2010), can support this analysis by surfacing implicit ethical assumptions in design.
5. This aligns with media-educational perspectives that frame Bildung as a dynamic process of identity and subject formation in technologically mediated environments (Jörissen & Marotzki, 2009).
6. Earlier work by Binns (2018) highlights that algorithmic fairness must be understood within broader political-philosophical traditions, which often diverge on the meaning of justice and equality.
7. West et al. (2019) show how such algorithmic systems often reinforce gendered and racialized power structures even in ostensibly neutral domains like education.
8. Efforts in explainable AI, such as visualization techniques for deep learning models, aim to mitigate this epistemic opacity but often fall short of addressing deeper interpretive concerns (Samek et al., 2019).
9. Selwyn (2019) cautions against framing AI as a replacement for educators, arguing that such imaginaries often sideline the formative, relational dimensions of teaching.
10. This resonates with Spiekermann-Hoff’s (2015) call for value-based system design, which links technical architectures to broader ethical commitments.
11. While much of the AI literature refers to “intelligent tutoring systems”, the term “mentoring” is used here in line with the relational and recognition-oriented account of pedagogical accompaniment developed in Donner and Hummel (in press), where mentoring is conceived not as adaptive instruction but as a formative space of co-agency and epistemic emergence.

References

  1. Akama, Y., Light, A., & Kamihira, T. (2020). Expanding participation to design with more-than-human concerns. In Proceedings of the 16th participatory design conference 2020—Participation(s) otherwise (Vol. 1, pp. 1–11). Association for Computing Machinery. [Google Scholar] [CrossRef]
  2. Amoore, L. (2020). Cloud ethics: Algorithms and the attributes of ourselves and others. Duke University Press. [Google Scholar] [CrossRef]
  3. Arendt, H. (1958). The human condition. University of Chicago Press. [Google Scholar]
  4. Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. Computers & Education: Artificial Intelligence, 2(1), 100025. [Google Scholar] [CrossRef]
  5. Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press. [Google Scholar] [CrossRef]
  6. Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press. Available online: https://fairmlbook.org/pdf/fairmlbook.pdf (accessed on 30 October 2025).
  7. Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. [Google Scholar] [CrossRef]
  8. Bartok, L., Donner, M.-T., Ebner, M., Gosch, N., Handle-Pfeiffer, D., Hummel, S., Kriegler-Kastelic, G., Leitner, P., Tang, T., Veljanova, H., Winter, C., & Zwiauer, C. (2023). Learning analytics—Studierende im fokus. Zeitschrift für Hochschulentwicklung: ZFHE; Beiträge zu Studium, Wissenschaft und Beruf, 18, 223–250. [Google Scholar] [CrossRef]
  9. Benjamins, R., Barbado, A., & Sierra, D. (2019). Responsible AI by design in practice. arXiv, arXiv:1909.12838. Available online: https://arxiv.org/abs/1909.12838 (accessed on 30 October 2025).
  10. Benossi, L., & Bernecker, S. (2022). A Kantian perspective on robot ethics. In H. Kim, & D. Schönecker (Eds.), Kant and artificial intelligence (pp. 147–168). De Gruyter. [Google Scholar]
  11. Bentham, J. (1996). The collected works of Jeremy Bentham: An introduction to the principles of morals and legislation. Clarendon Press. [Google Scholar]
  12. Biesta, G. (2006). Beyond learning: Democratic education for a human future. Paradigm Publishers. [Google Scholar]
  13. Binns, R. (2018, February 23–24). Fairness in machine learning: Lessons from political philosophy. 2018 Conference on Fairness, Accountability, and Transparency (FAT) (pp. 149–159), New York, NY, USA. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3086546 (accessed on 27 September 2025).
  14. Bulfin, S., Johnson, N. F., & Bigum, C. (2015). Critical is something others (don’t) do: Mapping the imaginative of educational technology. In S. Bulfin, N. F. Johnson, & C. Bigum (Eds.), Critical perspectives on technology and education (pp. 1–16). Palgrave Macmillan. [Google Scholar] [CrossRef]
  15. Cheng, A., & Fleischmann, K. R. (2010). Developing a meta-inventory of human values. Proceedings of the American Society for Information Science and Technology, 47, 1–10. [Google Scholar] [CrossRef]
  16. Clarke, R. (2019). Principles and business processes for responsible AI. Computer Law & Security Review, 35(4), 410–422. [Google Scholar] [CrossRef]
  17. Code, L., Harding, S., & Hekman, S. (1993). What can she know? Feminist theory and the construction of knowledge. Hypatia, 8(3), 202–210. [Google Scholar]
  18. Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. [Google Scholar] [CrossRef]
  19. Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., & Wang, W. W. (2021). AI, governance and ethics: Constitutional challenges in the algorithmic society. Cambridge University Press. [Google Scholar] [CrossRef]
  20. Dewey, J. (1916). Democracy and education: An introduction to the philosophy of education. Macmillan. [Google Scholar]
  21. Dierksmeier, C. (2022). Partners, not parts. Enhanced autonomy through artificial intelligence? A Kantian perspective. In H. Kim, & D. Schönecker (Eds.), Kant and artificial intelligence (pp. 239–256). De Gruyter. [Google Scholar]
  22. Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1–3. [Google Scholar] [CrossRef]
  23. Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. [Google Scholar] [CrossRef]
  24. Donner, M.-T., & Hummel, S. (in press). Systematic literature review of AI-mediated mentoring in higher education. In H.-W. Wollersheim, T. Köhler, & N. Pinkwart (Eds.), Scalable mentoring in higher education. Technological approaches, teaching patterns and AI techniques. Springer VS.
  25. Egger, R. (2006). Gesellschaft mit beschränkter Bildung. Eine empirische Studie zur sozialen Erreichbarkeit und zum individuellen Nutzen von Lernprozessen. Leykam. [Google Scholar]
  26. Egger, R. (2008). Biografie und Lebenswelt. Möglichkeiten und Grenzen der Biografie- und Lebensweltorientierung in der sozialen Arbeit. In J. Bakic, M. Diebäcker, & E. Hammer (Eds.), Aktuelle Leitbegriffe der sozialen Arbeit. Ein kritisches Handbuch (pp. 40–55). Löcker. [Google Scholar]
  27. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. [Google Scholar] [CrossRef]
  28. European Parliament & Council. (2024). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)—Final draft. Available online: https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AIA-Final-Draft-21-January-2024.pdf (accessed on 17 September 2025).
  29. Eynon, R., & Young, E. (2020). Methodology, legend, and rhetoric: The constructions of AI by academia, industry, and policy groups for lifelong learning. Science, Technology, and Human Values, 46(1), 166–191. [Google Scholar] [CrossRef]
  30. Feenberg, A. (1999). Questioning technology. Routledge. [Google Scholar]
  31. Feyerabend, P. (1975). Against method: Outline of an anarchistic theory of knowledge. New Left Books. Available online: https://hdl.handle.net/11299/184649 (accessed on 5 September 2025).
  32. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, Data Science Review, 2(1). [Google Scholar] [CrossRef]
  33. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–14. [Google Scholar] [CrossRef]
  34. Foucault, M. (1979). Überwachen und Strafen: Die Geburt des Gefängnisses. Suhrkamp. [Google Scholar]
  35. Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Ratio, 22(3), 369–373. [Google Scholar] [CrossRef]
  36. Habermas, J. (2022). Theorie des kommunikativen Handelns. Band 1: Handlungsrationalität und gesellschaftliche Rationalisierung (12th ed.). Suhrkamp. [Google Scholar]
  37. Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. [Google Scholar] [CrossRef]
  38. Harding, S. (1991). Whose science? Whose knowledge? Thinking from women’s lives. Cornell University Press. Available online: https://www.jstor.org/stable/10.7591/j.ctt1hhfnmg (accessed on 24 September 2025).
  39. Heidegger, M. (1976). Brief über den „Humanismus”. In F. W. von Hermann (Ed.), Gesamtausgabe (Vol. 9, pp. 365–383). Vittorio Klostermann. (Original work published 1946). [Google Scholar]
  40. Heidegger, M. (2005). Das Ge-Stell. In P. Jaeger (Ed.), Gesamtausgabe (Vol. 79, pp. 24–45). Vittorio Klostermann. (Original work published 1949). [Google Scholar]
  41. Held, V. (2006). The ethics of care: Personal, political, global. Oxford University Press. [Google Scholar]
  42. Honneth, A. (1994). Kampf um Anerkennung: Zur moralischen Grammatik sozialer Konflikte. Suhrkamp. (Original work published 1992). [Google Scholar]
  43. Hummel, S., & Donner, M.-T. (2023). KI-Anwendungen in der Hochschulbildung aus Studierendenperspektive. FNMA Magazin, 3, 38–41. [Google Scholar] [CrossRef]
  44. Hummel, S., Donner, M.-T., Abbas, S. H., & Wadhwa, G. (2025a). Bildungstechnologie-Design von KI-gestützten Avataren zur Förderung selbstregulierten Lernens. In T. Köhler, E. Schopp, N. Kahnwald, & R. Sonntag (Eds.), Community in new media. Trust in crisis: Communication models in digital communities: Proceedings of 27th conference GeNeMe. TUDpress. [Google Scholar]
  45. Hummel, S., Donner, M.-T., & Egger, R. (in press). Turning tides in higher education? Exploring roles and didactic functions of the VISION AI mentor. In H.-W. Wollersheim, T. Köhler, & N. Pinkwart (Eds.), Scalable mentoring in higher education. Technological approaches, teaching patterns and AI techniques. Springer VS.
  46. Hummel, S., Donner, M.-T., Wadhwa, G., & Abbas, S. H. (2025b). Competency assessment in higher education through the lens of artificial intelligence: A systematic review. International Journal of Artificial Intelligence in Education. in press. [Google Scholar]
  47. Hummel, S., Wadhwa, G., Abbas, S. H., & Donner, M.-T. (2025c). AI-enhanced personalized learning in higher education: Tracing a path to tailored support. In T. Köhler, E. Schopp, N. Kahnwald, & R. Sonntag (Eds.), Community in new media. Trust in crisis: Communication models in digital communities: Proceedings of 27th conference GeNeMe. TUDpress. [Google Scholar]
  48. Hutchins, E. (1995). Cognition in the wild. MIT Press. [Google Scholar] [CrossRef]
  49. Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press. [Google Scholar]
  50. Jonas, H. (1979). Das Prinzip Verantwortung: Versuch einer Ethik für die technologische Zivilisation. Suhrkamp. [Google Scholar]
  51. Jonas, H. (1984). Technik, Medizin und Ethik: Zur Praxis des Prinzips Verantwortung. Suhrkamp. [Google Scholar]
  52. Jongepier, F., & Keymolen, E. (2022). Explanation and agency: Exploring the normative-epistemic landscape of the “Right to Explanation”. Ethics Inf Technol, 24, 49. [Google Scholar] [CrossRef]
  53. Jörissen, B., & Marotzki, W. (2009). Medienbildung—Eine Einführung: Theorie—Methoden—Analysen (Uni-Taschenbücher Nr. 3189). UTB.
  54. Kant, I. (1998). Grundlegung zur Metaphysik der Sitten. Suhrkamp. (Original work published 1785). [Google Scholar]
  55. Klafki, W. (1996). Neue Studien zur Bildungstheorie und Didaktik: Zeitgemäße Allgemeinbildung und kritisch-konstruktive Didaktik (7th ed.). Beltz. [Google Scholar]
  56. Knox, J., Williamson, B., & Bayne, S. (2020). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31–45. [Google Scholar] [CrossRef]
  57. Köhler, T. (2003). Das Selbst im Netz. Die Konstruktion sozialer Identität in der computervermittelten Kommunikation. VS Verlag für Sozialwissenschaften Wiesbaden. [Google Scholar] [CrossRef]
  58. Kukutai, T., & Taylor, J. (2016). Indigenous data sovereignty: Toward an agenda. ANU Press. [Google Scholar]
  59. Latour, B. (1993). We have never been modern (C. Porter, Trans.). Harvard University Press. [Google Scholar]
  60. Luckin, R. (2018). Machine learning and human intelligence: The future of education for the 21st century. UCL Press. [Google Scholar]
  61. MacIntyre, A. (2007). After virtue (3rd ed.). Duckworth. [Google Scholar]
  62. Mackenzie, C., & Stoljar, N. (Eds.). (2000). Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford University Press. [Google Scholar]
  63. Mbiti, J. S. (1969). African religions and philosophy (2nd ed.). Heinemann Publishers. [Google Scholar]
  64. Medina, J. (2013). The epistemology of resistance: Gender and racial oppression, epistemic injustice, and resistant imaginations. Oxford University Press. [Google Scholar]
  65. Mill, J. S. (1987). Utilitarianism. In J. Gray (Ed.), The essential works of John Stuart Mill (pp. 272–338). Oxford University Press. (Original work published 1863). [Google Scholar]
  66. Mill, J. S. (2011). On liberty. Cambridge University Press. [Google Scholar]
  67. Nussbaum, M. C. (2000). Women and human development: The capabilities approach. Cambridge University Press. [Google Scholar] [CrossRef]
  68. Pauer-Studer, H. (2023). Vertragstheoretische Ethik. In C. Neuhäuser, J. Metzinger, A. Stadler, & A. Wagner (Eds.), Handbuch angewandte ethik (pp. 51–57). Springer. [Google Scholar] [CrossRef]
  69. Rawls, J. (1971). A theory of justice. Harvard University Press. [Google Scholar] [CrossRef]
  70. Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582–599. [Google Scholar] [CrossRef]
  71. Rose, N. (2015). Powers of freedom: Reframing political thought. Cambridge University Press. [Google Scholar]
  72. Rosenberger, R., & Verbeek, P.-P. (Eds.). (2015). Postphenomenological investigations: Essays on human–technology relations (pp. 61–97). Lexington Books. [Google Scholar]
  73. Samek, W., Wiegand, T., & Müller, K. R. (2019). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. Digital Signal Processing, 93, 101–110. [Google Scholar] [CrossRef]
  74. Scanlon, T. M. (1998). What we owe to each other. Harvard University Press. [Google Scholar]
  75. Schlicht, T. (2017). Kant and the problem of consciousness. In S. Leach, & J. Tartaglia (Eds.), Consciousness and the great philosophers (2nd ed.). Routledge. [Google Scholar]
  76. Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press. [Google Scholar]
  77. Sen, A. (1999). Development as freedom. Oxford University Press. [Google Scholar]
  78. Simondon, G. (2020). Individuation in light of notions of form and information. University of Minnesota Press. [Google Scholar]
  79. Spiekermann-Hoff, S. (2015). Ethical IT innovation: A value-based system design approach (1st ed.). Auerbach Publications. [Google Scholar] [CrossRef]
  80. Spivak, G. C. (1988). Can the subaltern speak? In C. Nelson, & L. Grossberg (Eds.), Marxism and the interpretation of culture (pp. 271–313). University of Illinois Press. [Google Scholar]
  81. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press. [Google Scholar]
  82. Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press. [Google Scholar]
  83. West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. [Google Scholar]
  84. Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. [Google Scholar] [CrossRef]
  85. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: The state of the art and future research directions. International Journal of Educational Technology in Higher Education, 16(1), 39. [Google Scholar] [CrossRef]
  86. Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research and future directions. Computers and Education: Artificial Intelligence, 2, 100025. [Google Scholar] [CrossRef]
Table 1. Comparison of Ethical AI and Responsible AI in HE.

| Dimension | Ethical AI (Principles) | Responsible AI (Process) | Normative Tension (Towards Situated Ethics) | Example in HE |
|---|---|---|---|---|
| Philosophical foundation | Deontology (Kant, 1785/1998), Utilitarianism (Bentham, 1996; Mill, 1863/1987): justice, autonomy, principled evaluation | Ethics of responsibility (Jonas, 1984), Contract theory (Rawls, 1971), Capability approach (Nussbaum, 2000; Sen, 1999): democratic legitimacy | Universal principles vs. contextual negotiation | Use of AI for admissions scoring justified by fairness metrics vs. participatory review boards negotiating admission criteria |
| Core question | “Is this action morally right?” | “Who decides, under what conditions, and with what consequences?” | Moral rightness vs. institutional accountability | Automated plagiarism detection flagged as “unethical” vs. student panels questioning legitimacy and due process |
| Orientation | Normative clarity, principle-based evaluation | Procedural inclusion, participatory governance | Abstract norms vs. lived procedures | Learner profiling optimized for “success outcomes” vs. co-designed learning pathways negotiated with faculty and students |
| Strengths | Philosophical precision, definitional clarity, stable critique | Context sensitivity, inclusion, attention to institutional dynamics | Clarity vs. contextual responsiveness | Ethical AI flags biased grading outputs; Responsible AI convenes stakeholder review across departments |
| Weaknesses | Risk of abstraction, low sensitivity to power and diversity | Risk of tokenism, governance without substance | Principle-blindness vs. process-blindness | Appeals to academic integrity principles without considering student voice vs. inclusive procedures that still reproduce status hierarchies |
| Applications in HE | Evaluating manipulation, autonomy, epistemic justice in algorithmic grading | Governance of LMS platforms, student involvement in AI tool evaluation | Individual rights vs. institutional processes | Bias audits for AI feedback tools vs. AI governance committees with student representation |
| Theoretical extensions | Philosophy of technology, post-phenomenology (Ihde, 1990; Verbeek, 2011), moral mediation | Feminist epistemology, decolonial critique, data colonialism (Couldry & Mejias, 2019; Harding, 1991; Medina, 2013) | Ideal critique vs. critical contextualization | Questioning the normative rationality of AI mentoring vs. interrogating data extractivism in HE platforms |
| Educational implication | Bildung as autonomy and moral development | Education as democratic participation and value negotiation | Subject formation vs. democratic collectivity | AI used to evaluate ‘autonomous learning’ vs. AI used as a forum for collective curriculum shaping |
| Meta-level | Ethics as assessment against ideals | Ethics as institutional responsibility, procedural justice, recognition | Ethical principles vs. political practice | Academic AI guidelines referencing moral values vs. negotiation of those values through AI policy hearings in HE |
Table 2. Comparative Schema: Ethical Traditions as Partial Normative Optics.

| Ethical Tradition | Normative Strength | Conceptual Limitation |
|---|---|---|
| Deontological ethics (Kant, 1785/1998) | Makes autonomy, duty and non-instrumental respect for learners visible | Overlooks how structural conditions and unequal agency restrict the possibility of autonomy in practice |
| Utilitarian ethics (Bentham, 1996; Mill, 1863/1987) | Illuminates questions of efficiency, optimization and outcome-based fairness | Tends to legitimize majoritarian logics and diminishes attention to marginal epistemic positions |
| Responsibility ethics (Jonas, 1984) | Brings futurity, precaution and long-term ethical responsibility in technological design into focus | Provides little guidance for immediate participatory legitimacy and everyday institutional decision-making |
| Contract theory (Rawls, 1971) | Frames justice as fairness through inclusion and deliberation under conditions of equality | Assumes ideal deliberative symmetry that rarely exists in stratified HE environments |
| Capability approach (Nussbaum, 2000; Sen, 1999) | Highlights agency, real freedoms and the enabling conditions required for meaningful participation in education | Is less explicit about how governance structures and accountability mechanisms should be operationalised |
Table 3. Algorithmic Grading as a Site of Situated Ethics.

| Analytical Dimension | Ethical AI | Responsible AI | Situated Ethics |
|---|---|---|---|
| Evaluative logic | Transparent and unbiased scoring based on predefined criteria | Inclusion of stakeholders in defining and revising scoring parameters | Learners can question scoring assumptions and assert epistemic authority |
| Power configuration | System evaluates the learner | Governance structures mediate evaluation processes | Evaluation becomes a negotiated space where learners act as epistemic co-agents |
| Normative shift | Fair outcomes | Legitimate procedures | Recognition and contestability reshape grading as epistemic negotiation |
Table 4. Admissions as Epistemic and Institutional Negotiation.

| Analytical Dimension | Ethical AI | Responsible AI | Situated Ethics |
|---|---|---|---|
| Evaluative logic | Merit-based fairness through consistent scoring | Transparent governance of admission criteria with stakeholder inclusion | Applicants can question data representations and trigger institutional adaptation |
| Power configuration | System evaluates applicants according to fixed principles | Institutional bodies oversee decision pipelines with procedural accountability | Admissions becomes a revisable interpretive space shaped by applicant feedback |
| Normative shift | Fair distribution of opportunity | Inclusion in governance of evaluative systems | Reflexive negotiation of what counts as academic potential |
Table 5. Personalization as Epistemic Co-Agency.

| Analytical Dimension | Ethical AI | Responsible AI | Situated Ethics |
|---|---|---|---|
| Evaluative logic | Respect individual freedom in personalization pathways | Enable oversight and modifiability of adaptive rules | Allow learners to express and justify learning choices as epistemic agents |
| Power configuration | System presents optimized paths | Governance bodies review adaptive logic and feedback | Learners reinterpret and negotiate personalization logic |
| Normative shift | Protect from manipulation | Monitor adaptive processes through institutional procedure | Personalization becomes dialogic co-agency grounded in recognition |
Table 6. Translating Ethical and Responsible AI into Educational Practice through Situated Ethics.

| Dimension | Ethical AI (Principles) | Responsible AI (Process) | Situated Ethics (Integrated) |
|---|---|---|---|
| Autonomy | Safeguard learners’ capacity for reflective self-determination; avoid manipulation; ensure transparency of evaluative pathways | Involve students and educators in defining personalization goals; make adaptation negotiable rather than fixed | Create dialogic infrastructures in which learners can question and reframe adaptive logics as part of epistemic co-agency |
| Fairness | Evaluate systems for bias and distributive justice; apply principled criteria for equal treatment | Embed participatory procedures to define fairness standards; ensure plural representation in governance | Combine technical bias audits with recognition-oriented pedagogy, ensuring that epistemic difference becomes visible and legitimate |
| Democratic participation | Anchor evaluation in principles of justice, equality and dignity | Develop governance mechanisms such as feedback loops, stakeholder councils and participatory design | Foster institutional cultures where AI remains open to contestation and reinterpretation, supporting civic as well as pedagogical agency |
| Epistemic justice | Question whose knowledge is legitimized and whose voices are silenced within classification schemes | Establish accountability routines to detect testimonial and hermeneutic injustice | Align AI with practices of recognition that affirm learners as credible knowers and co-constructors of meaning |
| Evaluation and accountability | Apply normative frameworks (deontology, utilitarianism, capability approach) to system assessment | Institutionalize continuous monitoring and multi-stakeholder oversight | Practice methodological pluralism by combining audits, pedagogical reflection and learner-centered epistemic inquiry |

