Ethical and Responsible AI in Education: Situated Ethics for Democratic Learning
Abstract
1. Introduction
- It offers a conceptual map that contrasts Ethical AI and Responsible AI as normative architectures.
- It develops an evaluative criteria set grounded in situated ethics, centered on recognition, contestability, reflective autonomy, and institutional responsiveness.
- It proposes a practice-oriented diagnostic framework that applies these criteria to the assessment of AI systems in educational contexts, operationalized in the tabular tools presented later in the article.
2. Mapping AI Ethics: Philosophical Grounds and Educational Implications
2.1. Ethical AI: Moral Rightness and Normative Clarity
2.2. Responsible AI: Process, Pluralism, and Situated Judgments
2.3. Tensions and Transformative Potentials Between Ethical and Responsible AI
3. Ethical Tensions in Educational AI: Personalization, Fairness, and Epistemic Justice
3.1. Personalization and Ethical Conditions of Autonomy
3.2. The Ethics of Fairness in Educational AI: Normative Conflicts and Epistemic Injustice
3.3. Epistemic Agency and Democratic Subject Formation
4. Reframing AI as a Constitutive Pedagogical Actor in Democratic Education
4.1. From Procedural Ethics to Formative Intentionality: Reframing AI’s Role in Educational Subjectivation
4.2. AI and the Architecture of Epistemic Access
4.3. Democratic Participation and the Ethics of Co-Agency
4.4. Generative Ambiguity: Toward a Situated Ethics of Educational AI Systems
5. Implications for Practice
5.1. Microcase 1—Algorithmic Grading
5.2. Microcase 2—AI-Mediated Admissions Systems
5.3. Microcase 3—Adaptive Mentoring Systems
6. Conclusions: Reframing Ethical AI as Situated Educational Practice
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| AI | Artificial Intelligence |
| HE | Higher Education |
Notes
| # | Note |
|---|---|
| 1 | While recent analyses have focused on the normative foundations of explainability and justification in machine learning systems (Jongepier & Keymolen, 2022), this article extends the discussion to the educational domain, where AI functions not merely as a technical artefact but as a co-constitutive epistemic agent that shapes learners’ access to recognition and knowledge. |
| 2 | Roll and Wylie (2016) trace the historical evolution of AI in education, noting a growing tension between innovation-driven development and pedagogical intentionality. |
| 3 | Scanlon (1998) emphasizes that moral justification to others lies at the heart of ethical reasoning, a view that aligns closely with participatory approaches to AI design. Building on this, Pauer-Studer (2023) stresses the importance of procedural legitimacy and justificatory reciprocity as normative foundations for decision architectures. In the context of educational AI, this resonates with Dignum’s account of responsible autonomy, which highlights the ethical complexity that arises when algorithmic decisions intersect with human accountability, and which frames AI ethics as a plural field in which competing traditions of normative reasoning co-exist, often with implicit tensions (Dignum, 2018). |
| 4 | A meta-inventory of human values, such as that proposed by Cheng and Fleischmann (2010), can support this analysis by surfacing implicit ethical assumptions in design. |
| 5 | This aligns with media-educational perspectives that frame Bildung as a dynamic process of identity and subject formation in technologically mediated environments (Jörissen & Marotzki, 2009). |
| 6 | |
| 7 | West et al. (2019) show how such algorithmic systems often reinforce gendered and racialized power structures even in ostensibly neutral domains like education. |
| 8 | Efforts in explainable AI, such as visualization techniques for deep learning models, aim to mitigate this epistemic opacity but often fall short of addressing deeper interpretive concerns (Samek et al., 2019). |
| 9 | |
| 10 | This resonates with Spiekermann-Hoff’s (2015) call for value-based system design, which links technical architectures to broader ethical commitments. |
| 11 | While much of the AI literature refers to “intelligent tutoring systems”, the term “mentoring” is used here in line with the relational and recognition-oriented account of pedagogical accompaniment developed in Donner and Hummel (in press), where mentoring is conceived not as adaptive instruction but as a formative space of co-agency and epistemic emergence. |
References
- Akama, Y., Light, A., & Kamihira, T. (2020). Expanding participation to design with more-than-human concerns. In Proceedings of the 16th participatory design conference 2020—Participation(s) otherwise (Vol. 1, pp. 1–11). Association for Computing Machinery.
- Amoore, L. (2020). Cloud ethics: Algorithms and the attributes of ourselves and others. Duke University Press.
- Arendt, H. (1958). The human condition. University of Chicago Press.
- Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. Computers and Education: Artificial Intelligence, 2(1), 100025.
- Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.
- Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press. Available online: https://fairmlbook.org/pdf/fairmlbook.pdf (accessed on 30 October 2025).
- Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
- Bartok, L., Donner, M.-T., Ebner, M., Gosch, N., Handle-Pfeiffer, D., Hummel, S., Kriegler-Kastelic, G., Leitner, P., Tang, T., Veljanova, H., Winter, C., & Zwiauer, C. (2023). Learning analytics—Studierende im Fokus. Zeitschrift für Hochschulentwicklung: ZFHE; Beiträge zu Studium, Wissenschaft und Beruf, 18, 223–250.
- Benjamins, R., Barbado, A., & Sierra, D. (2019). Responsible AI by design in practice. arXiv:1909.12838. Available online: https://arxiv.org/abs/1909.12838 (accessed on 30 October 2025).
- Benossi, L., & Bernecker, S. (2022). A Kantian perspective on robot ethics. In H. Kim, & D. Schönecker (Eds.), Kant and artificial intelligence (pp. 147–168). De Gruyter.
- Bentham, J. (1996). The collected works of Jeremy Bentham: An introduction to the principles of morals and legislation. Clarendon Press.
- Biesta, G. (2006). Beyond learning: Democratic education for a human future. Paradigm Publishers.
- Binns, R. (2018, February 23–24). Fairness in machine learning: Lessons from political philosophy. 2018 Conference on Fairness, Accountability, and Transparency (FAT) (pp. 149–159), New York, NY, USA. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3086546 (accessed on 27 September 2025).
- Bulfin, S., Johnson, N. F., & Bigum, C. (2015). Critical is something others (don’t) do: Mapping the imaginative of educational technology. In S. Bulfin, N. F. Johnson, & C. Bigum (Eds.), Critical perspectives on technology and education (pp. 1–16). Palgrave Macmillan.
- Cheng, A., & Fleischmann, K. R. (2010). Developing a meta-inventory of human values. Proceedings of the American Society for Information Science and Technology, 47, 1–10.
- Clarke, R. (2019). Principles and business processes for responsible AI. Computer Law & Security Review, 35(4), 410–422.
- Code, L., Harding, S., & Hekman, S. (1993). What can she know? Feminist theory and the construction of knowledge. Hypatia, 8(3), 202–210.
- Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
- Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., & Wang, W. W. (2021). AI, governance and ethics: Constitutional challenges in the algorithmic society. Cambridge University Press.
- Dewey, J. (1916). Democracy and education: An introduction to the philosophy of education. Macmillan.
- Dierksmeier, C. (2022). Partners, not parts. Enhanced autonomy through artificial intelligence? A Kantian perspective. In H. Kim, & D. Schönecker (Eds.), Kant and artificial intelligence (pp. 239–256). De Gruyter.
- Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1–3.
- Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
- Donner, M.-T., & Hummel, S. (in press). Systematic literature review of AI-mediated mentoring in higher education. In H.-W. Wollersheim, T. Köhler, & N. Pinkwart (Eds.), Scalable mentoring in higher education. Technological approaches, teaching patterns and AI techniques. Springer VS.
- Egger, R. (2006). Gesellschaft mit beschränkter Bildung. Eine empirische Studie zur sozialen Erreichbarkeit und zum individuellen Nutzen von Lernprozessen. Leykam.
- Egger, R. (2008). Biografie und Lebenswelt. Möglichkeiten und Grenzen der Biografie- und Lebensweltorientierung in der sozialen Arbeit. In J. Bakic, M. Diebäcker, & E. Hammer (Eds.), Aktuelle Leitbegriffe der sozialen Arbeit. Ein kritisches Handbuch (pp. 40–55). Löcker.
- Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
- European Parliament & Council. (2024). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)—Final draft. Available online: https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AIA-Final-Draft-21-January-2024.pdf (accessed on 17 September 2025).
- Eynon, R., & Young, E. (2020). Methodology, legend, and rhetoric: The constructions of AI by academia, industry, and policy groups for lifelong learning. Science, Technology, and Human Values, 46(1), 166–191.
- Feenberg, A. (1999). Questioning technology. Routledge.
- Feyerabend, P. (1975). Against method: Outline of an anarchistic theory of knowledge. New Left Books. Available online: https://hdl.handle.net/11299/184649 (accessed on 5 September 2025).
- Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication No. 2020-1.
- Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–14.
- Foucault, M. (1979). Überwachen und Strafen: Die Geburt des Gefängnisses. Suhrkamp.
- Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
- Habermas, J. (2022). Theorie des kommunikativen Handelns. Band 1: Handlungsrationalität und gesellschaftliche Rationalisierung (12th ed.). Suhrkamp.
- Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599.
- Harding, S. (1991). Whose science? Whose knowledge? Thinking from women’s lives. Cornell University Press. Available online: https://www.jstor.org/stable/10.7591/j.ctt1hhfnmg (accessed on 24 September 2025).
- Heidegger, M. (1976). Brief über den „Humanismus”. In F. W. von Hermann (Ed.), Gesamtausgabe (Vol. 9, pp. 365–383). Vittorio Klostermann. (Original work published 1946).
- Heidegger, M. (2005). Das Ge-Stell. In P. Jaeger (Ed.), Gesamtausgabe (Vol. 79, pp. 24–45). Vittorio Klostermann. (Original work published 1949).
- Held, V. (2006). The ethics of care: Personal, political, global. Oxford University Press.
- Honneth, A. (1994). Kampf um Anerkennung: Zur moralischen Grammatik sozialer Konflikte. Suhrkamp. (Original work published 1992).
- Hummel, S., & Donner, M.-T. (2023). KI-Anwendungen in der Hochschulbildung aus Studierendenperspektive. FNMA Magazin, 3, 38–41.
- Hummel, S., Donner, M.-T., Abbas, S. H., & Wadhwa, G. (2025a). Bildungstechnologie-Design von KI-gestützten Avataren zur Förderung selbstregulierten Lernens. In T. Köhler, E. Schopp, N. Kahnwald, & R. Sonntag (Eds.), Community in new media. Trust in crisis: Communication models in digital communities: Proceedings of 27th conference GeNeMe. TUDpress.
- Hummel, S., Donner, M.-T., & Egger, R. (in press). Turning tides in higher education? Exploring roles and didactic functions of the VISION AI mentor. In H.-W. Wollersheim, T. Köhler, & N. Pinkwart (Eds.), Scalable mentoring in higher education. Technological approaches, teaching patterns and AI techniques. Springer VS.
- Hummel, S., Donner, M.-T., Wadhwa, G., & Abbas, S. H. (2025b). Competency assessment in higher education through the lens of artificial intelligence: A systematic review. International Journal of Artificial Intelligence in Education, in press.
- Hummel, S., Wadhwa, G., Abbas, S. H., & Donner, M.-T. (2025c). AI-enhanced personalized learning in higher education: Tracing a path to tailored support. In T. Köhler, E. Schopp, N. Kahnwald, & R. Sonntag (Eds.), Community in new media. Trust in crisis: Communication models in digital communities: Proceedings of 27th conference GeNeMe. TUDpress.
- Hutchins, E. (1995). Cognition in the wild. MIT Press.
- Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
- Jonas, H. (1979). Das Prinzip Verantwortung: Versuch einer Ethik für die technologische Zivilisation. Suhrkamp.
- Jonas, H. (1984). Technik, Medizin und Ethik: Zur Praxis des Prinzips Verantwortung. Suhrkamp.
- Jongepier, F., & Keymolen, E. (2022). Explanation and agency: Exploring the normative-epistemic landscape of the “Right to Explanation”. Ethics and Information Technology, 24, 49.
- Jörissen, B., & Marotzki, W. (2009). Medienbildung—Eine Einführung: Theorie—Methoden—Analysen (Uni-Taschenbücher Nr. 3189). UTB.
- Kant, I. (1998). Grundlegung zur Metaphysik der Sitten. Suhrkamp. (Original work published 1785).
- Klafki, W. (1996). Neue Studien zur Bildungstheorie und Didaktik: Zeitgemäße Allgemeinbildung und kritisch-konstruktive Didaktik (7th ed.). Beltz.
- Knox, J., Williamson, B., & Bayne, S. (2020). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31–45.
- Köhler, T. (2003). Das Selbst im Netz. Die Konstruktion sozialer Identität in der computervermittelten Kommunikation. VS Verlag für Sozialwissenschaften.
- Kukutai, T., & Taylor, J. (2016). Indigenous data sovereignty: Toward an agenda. ANU Press.
- Latour, B. (1993). We have never been modern (C. Porter, Trans.). Harvard University Press.
- Luckin, R. (2018). Machine learning and human intelligence: The future of education for the 21st century. UCL Press.
- MacIntyre, A. (2007). After virtue (3rd ed.). Duckworth.
- Mackenzie, C., & Stoljar, N. (Eds.). (2000). Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford University Press.
- Mbiti, J. S. (1969). African religions and philosophy (2nd ed.). Heinemann Publishers.
- Medina, J. (2013). The epistemology of resistance: Gender and racial oppression, epistemic injustice, and resistant imaginations. Oxford University Press.
- Mill, J. S. (1987). Utilitarianism. In J. Gray (Ed.), The essential works of John Stuart Mill (pp. 272–338). Oxford University Press. (Original work published 1863).
- Mill, J. S. (2011). On liberty. Cambridge University Press.
- Nussbaum, M. C. (2000). Women and human development: The capabilities approach. Cambridge University Press.
- Pauer-Studer, H. (2023). Vertragstheoretische Ethik. In C. Neuhäuser, J. Metzinger, A. Stadler, & A. Wagner (Eds.), Handbuch angewandte Ethik (pp. 51–57). Springer.
- Rawls, J. (1971). A theory of justice. Harvard University Press.
- Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582–599.
- Rose, N. (2015). Powers of freedom: Reframing political thought. Cambridge University Press.
- Rosenberger, R., & Verbeek, P.-P. (Eds.). (2015). Postphenomenological investigations: Essays on human–technology relations (pp. 61–97). Lexington Books.
- Samek, W., Wiegand, T., & Müller, K. R. (2019). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. Digital Signal Processing, 93, 101–110.
- Scanlon, T. M. (1998). What we owe to each other. Harvard University Press.
- Schlicht, T. (2017). Kant and the problem of consciousness. In S. Leach, & J. Tartaglia (Eds.), Consciousness and the great philosophers (2nd ed.). Routledge.
- Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.
- Sen, A. (1999). Development as freedom. Oxford University Press.
- Simondon, G. (2020). Individuation in light of notions of form and information. University of Minnesota Press.
- Spiekermann-Hoff, S. (2015). Ethical IT innovation: A value-based system design approach (1st ed.). Auerbach Publications.
- Spivak, G. C. (1988). Can the subaltern speak? In C. Nelson, & L. Grossberg (Eds.), Marxism and the interpretation of culture (pp. 271–313). University of Illinois Press.
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
- Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.
- West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute.
- Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39.
- Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research and future directions. Computers and Education: Artificial Intelligence, 2, 100025.
| Dimension | Ethical AI (Principles) | Responsible AI (Process) | Normative Tension (Towards Situated Ethics) | Example in HE |
|---|---|---|---|---|
| Philosophical foundation | Deontology (Kant, 1785/1998), Utilitarianism (Bentham, 1996; Mill, 1863/1987): justice, autonomy, principled evaluation | Ethics of responsibility (Jonas, 1984), Contract theory (Rawls, 1971), Capability approach (Nussbaum, 2000; Sen, 1999): democratic legitimacy | Universal principles vs. contextual negotiation | Use of AI for admissions scoring justified by fairness metrics vs. participatory review boards negotiating admission criteria |
| Core question | “Is this action morally right?” | “Who decides, under what conditions, and with what consequences?” | Moral rightness vs. institutional accountability | Automated plagiarism detection flagged as “unethical” vs. student panels questioning legitimacy and due process |
| Orientation | Normative clarity, principle-based evaluation | Procedural inclusion, participatory governance | Abstract norms vs. lived procedures | Learner profiling optimized for “success outcomes” vs. co-designed learning pathways negotiated with faculty and students |
| Strengths | Philosophical precision, definitional clarity, stable critique | Context sensitivity, inclusion, attention to institutional dynamics | Clarity vs. contextual responsiveness | Ethical AI flags biased grading outputs; Responsible AI convenes stakeholder review across departments |
| Weaknesses | Risk of abstraction, low sensitivity to power and diversity | Risk of tokenism, governance without substance | Principle-blindness vs. process-blindness | Appeals to academic integrity principles without considering student voice vs. inclusive procedures that still reproduce status hierarchies |
| Applications in HE | Evaluating manipulation, autonomy, epistemic justice in algorithmic grading | Governance of LMS platforms, student involvement in AI tool evaluation | Individual rights vs. institutional processes | Bias audits for AI feedback tools vs. AI governance committees with student representation |
| Theoretical extensions | Philosophy of technology, post-phenomenology (Ihde, 1990; Verbeek, 2011), moral mediation | Feminist epistemology, decolonial critique, data colonialism (Couldry & Mejias, 2019; Harding, 1991; Medina, 2013) | Ideal critique vs. critical contextualization | Questioning the normative rationality of AI mentoring vs. interrogating data extractivism in HE platforms |
| Educational implication | Bildung as autonomy and moral development | Education as democratic participation and value negotiation | Subject formation vs. democratic collectivity | AI used to evaluate ‘autonomous learning’ vs. AI used as a forum for collective curriculum shaping |
| Meta-level | Ethics as assessment against ideals | Ethics as institutional responsibility, procedural justice, recognition | Ethical principles vs. political practice | Academic AI guidelines referencing moral values vs. negotiation of those values through AI policy hearings in HE |
| Ethical Tradition | Normative Strength | Conceptual Limitation |
|---|---|---|
| Deontological ethics (Kant, 1785/1998) | Makes autonomy, duty and non-instrumental respect for learners visible | Overlooks how structural conditions and unequal agency restrict the possibility of autonomy in practice |
| Utilitarian ethics (Bentham, 1996; Mill, 1863/1987) | Illuminates questions of efficiency, optimization and outcome-based fairness | Tends to legitimize majoritarian logics and diminishes attention to marginal epistemic positions |
| Responsibility ethics (Jonas, 1984) | Brings futurity, precaution and long-term ethical responsibility in technological design into focus | Provides little guidance for immediate participatory legitimacy and everyday institutional decision-making |
| Contract theory (Rawls, 1971) | Frames justice as fairness through inclusion and deliberation under conditions of equality | Assumes ideal deliberative symmetry that rarely exists in stratified HE environments |
| Capability approach (Nussbaum, 2000; Sen, 1999) | Highlights agency, real freedoms and the enabling conditions required for meaningful participation in education | Is less explicit about how governance structures and accountability mechanisms should be operationalized |
| Analytical Dimension | Ethical AI | Responsible AI | Situated Ethics |
|---|---|---|---|
| Evaluative logic | Transparent and unbiased scoring based on predefined criteria | Inclusion of stakeholders in defining and revising scoring parameters | Learners can question scoring assumptions and assert epistemic authority |
| Power configuration | System evaluates the learner | Governance structures mediate evaluation processes | Evaluation becomes a negotiated space where learners act as epistemic co-agents |
| Normative shift | Fair outcomes | Legitimate procedures | Recognition and contestability reshape grading as epistemic negotiation |
| Analytical Dimension | Ethical AI | Responsible AI | Situated Ethics |
|---|---|---|---|
| Evaluative logic | Merit-based fairness through consistent scoring | Transparent governance of admission criteria with stakeholder inclusion | Applicants can question data representations and trigger institutional adaptation |
| Power configuration | System evaluates applicants according to fixed principles | Institutional bodies oversee decision pipelines with procedural accountability | Admissions becomes a revisable interpretive space shaped by applicant feedback |
| Normative shift | Fair distribution of opportunity | Inclusion in governance of evaluative systems | Reflexive negotiation of what counts as academic potential |
| Analytical Dimension | Ethical AI | Responsible AI | Situated Ethics |
|---|---|---|---|
| Evaluative logic | Respect individual freedom in personalization pathways | Enable oversight and modifiability of adaptive rules | Allow learners to express and justify learning choices as epistemic agents |
| Power configuration | System presents optimized paths | Governance bodies review adaptive logic and feedback | Learners reinterpret and negotiate personalization logic |
| Normative shift | Protect from manipulation | Monitor adaptive processes through institutional procedure | Personalization becomes dialogic co-agency grounded in recognition |
| Dimension | Ethical AI (Principles) | Responsible AI (Process) | Situated Ethics (Integrated) |
|---|---|---|---|
| Autonomy | Safeguard learners’ capacity for reflective self-determination; avoid manipulation; ensure transparency of evaluative pathways | Involve students and educators in defining personalization goals; make adaptation negotiable rather than fixed | Create dialogic infrastructures in which learners can question and reframe adaptive logics as part of epistemic co-agency |
| Fairness | Evaluate systems for bias and distributive justice; apply principled criteria for equal treatment | Embed participatory procedures to define fairness standards; ensure plural representation in governance | Combine technical bias audits with recognition-oriented pedagogy, ensuring that epistemic difference becomes visible and legitimate |
| Democratic participation | Anchor evaluation in principles of justice, equality and dignity | Develop governance mechanisms such as feedback loops, stakeholder councils and participatory design | Foster institutional cultures where AI remains open to contestation and reinterpretation, supporting civic as well as pedagogical agency |
| Epistemic justice | Question whose knowledge is legitimized and whose voices are silenced within classification schemes | Establish accountability routines to detect testimonial and hermeneutic injustice | Align AI with practices of recognition that affirm learners as credible knowers and co-constructors of meaning |
| Evaluation and accountability | Apply normative frameworks (deontology, utilitarianism, capability approach) to system assessment | Institutionalize continuous monitoring and multi-stakeholder oversight | Practice methodological pluralism by combining audits, pedagogical reflection and learner-centered epistemic inquiry |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).