1. Introduction
Across the globe, educational systems are embracing digital transformation as part of a broader pursuit of sustainable development in education. International organisations assert that emerging technologies like artificial intelligence (AI) hold great promise for accelerating progress towards inclusive and quality education for all (United Nations Sustainable Development Goal 4). In line with such aspirations, the adoption of AI within educational institutions is frequently framed as an inevitable progression, promising enhanced instructional efficiency, administrative productivity, and personalised learning experiences [
1]. This mainstream discourse, however, often obscures the political–economic forces that shape EdTech markets, portraying AI as a value-neutral, context-free solution that universally benefits educational stakeholders. Beneath this veneer of neutrality lies a profound transformation of educational labour, institutional governance, and the ethical foundations of the teaching profession, particularly in higher education.
Globally, AI technologies are making significant inroads into universities and schools, reshaping the roles and responsibilities of educators and administrators alike [
2]. For instance, recent trends show rapid uptake of AI: nearly half of surveyed K–12 teachers in the United States had received training to integrate AI into their work by late 2024 [
3]. Similar trends are evident in higher education internationally, driven by managerial approaches favouring data-driven oversight, performance indicators, and platform-based accountability systems [
2,
4]. In this landscape, AI ceases to function merely as an instructional aid—it becomes an instrument of algorithmic governance, fundamentally reshaping the nature of teaching work in ways reminiscent of platform-mediated labour regimes observed in other industries [
5]. Teaching is increasingly subjected to quantification, monitoring, and optimisation by AI-driven analytics, its rhythms defined by data flows rather than by educators’ professional judgment or ethical discretion. Tasks traditionally central to teaching, such as curriculum design, grading, student interaction, and feedback, are being delegated to AI systems [
6,
7]. Consequently, educators face an erosion of professional autonomy, creativity, and relational capacity, raising critical ethical concerns for institutional governance and educational policy worldwide.
Importantly, this shift is not pedagogically neutral; it is propelled by economic imperatives and governance logics inherent in the burgeoning educational technology (EdTech) sector. Companies ranging from multinational publishers to venture-backed start-ups aggressively promote AI as a necessary innovation, yet their offerings often prioritise scalability and profit over pedagogical integrity, transparency, or faculty autonomy [
1,
8]. As Noble warns in his seminal critique of ‘digital diploma mills’, commercial platforms have long sought to appropriate academic content and deskill educators for the sake of rent extraction [
9]. Recent analyses by Hall [
10] and Preston [
11] likewise show that AI extends this logic, deepening the commodification and surveillance of academic labour. An ethically grounded critique of AI in education must, therefore, extend beyond questions of efficacy, interrogating the commercial imperatives, opaque data practices, and power asymmetries inherent in current AI adoption strategies [
12]. In addition, the environmental sustainability of large-scale AI adoption, such as the significant energy and resource demands of AI infrastructures, has emerged as a concern [
13], underlining that truly sustainable digital education requires attention to social, ethical, political–economic, and environmental dimensions of technology use.
Although an extensive body of research examines the pedagogical benefits and policy aspects of AI in education [
2,
14], comparatively little attention has been paid to AI’s impact on educators themselves, as ethical agents, professional knowledge workers, and relational practitioners in educational institutions. Teachers and academics increasingly find themselves constrained by algorithmic systems and predictive analytics, expected to conform to externally defined protocols rather than exercise professional autonomy or context-sensitive pedagogy [
15]. Komljenovic’s work on platform capitalism in higher education highlights how such data infrastructures reposition educators as data-generating assets within global EdTech markets [
16,
17]. This emerging reality raises urgent questions about AI’s implications not only for student outcomes but also for the well-being, identity, and agency of educators who are central to the sustainability of any educational innovation.
To critically examine these transformations and their broader implications for sustainable digital education, this article employs Karl Marx’s theory of alienation, adapted to the contemporary context of AI-driven education. Marx’s concept of alienation, originally formulated to critique industrial-era labour conditions, described workers’ estrangement from the products of their work, from the work process, from their “species-being” (creative and social identity), and from their fellow workers [
18]. Although rooted in an industrial context, alienation theory has been fruitfully extended to modern knowledge-intensive and caring professions, proving pertinent for analysing academic labour in the digital age [
5,
19]. Using this framework, the present study investigates how AI integration may alienate educators along four dimensions: (1) the products of academic labour (e.g., loss of ownership over curricular materials or student outputs), (2) the educational work process (e.g., teaching methods and pace dictated by algorithms), (3) professional identity and human essence (educators’ creative, moral, and professional self-concept, akin to Marx’s “species-being”), and (4) interpersonal relations (teacher–student and collegial relationships). Recognising contemporary critiques that digital capitalism introduces new forms of control, commodification, and algorithmic oversight [
5,
20], and building on recent Marxian analyses of higher education labour [
9,
10,
11], we argue that unchecked AI deployment in education risks exacerbating these alienating forces. We further contend that the sustainable and ethical integration of AI in education demands careful institutional design, democratic oversight, and participatory governance models that involve educators in decision-making.
Ultimately, this paper positions AI-driven educational transformation not merely as a technical innovation but as a significant reconfiguration of educational labour, professional autonomy, and ethics within academia. While AI has long shaped engineering applications, its migration into educational research marks a conceptual shift, one that prompts scholars to increasingly view AI as a socio-technical system reconfiguring labour, agency, and ethics [
21,
22,
23]. By engaging with critical perspectives on digital governance and the moral implications of algorithmic decision-making and labour commodification [
13,
20,
24], and by framing our quantitative findings as exploratory illustrations rather than definitive evidence, we aim to offer institutional leaders, policymakers, and educators practical insights into managing these transformations ethically and sustainably. In doing so, the study contributes to international debates on how to align AI-based innovations in teaching and learning with the long-term goals of sustainable digital education, balancing technological advancement with the preservation of educational quality, equity, and human-centric values.
2. Theoretical Framework
The integration of AI into higher education must be understood as part of a broader organisational shift towards data-centric and algorithmic governance. This reflects global trends in how universities and schools are increasingly utilising data-driven systems to manage, evaluate, and even control academic work [
25,
26]. Rather than serving merely as pedagogical enhancements, AI systems have become central to institutional strategy, providing automated oversight of teaching and administrative tasks and thereby altering how educators perceive their roles, autonomy, and professional identity [
1,
4]. Scholars such as Noble [
9] and Komljenovic [
17] argue that this shift is inseparable from the expansion of EdTech markets, where educational data are treated as assets and teaching is reorganised for rent extraction, not purely for pedagogical benefit. This transformation raises critical ethical concerns about the balance of power, the erosion of professional agency, and the integrity of academic work across diverse educational contexts.
A key concept for understanding these dynamics is algorithmic management, which describes how digital technologies restructure professional work by displacing human judgment, reducing pedagogical discretion, and intensifying surveillance and control through data analytics [
4]. In education, algorithmic management might manifest in AI-driven dashboards that evaluate faculty performance based on student satisfaction scores, grading turnaround times, or adherence to standardised curricula. Such practices introduce new pressures and norms for academic staff, potentially reshaping teachers into data-monitored operators rather than autonomous professionals exercising judgment and pedagogical creativity [
1]. Hall [
10] characterises this re-engineering of academic labour as a new phase of proletarianisation, in which educators’ autonomy is subsumed under metric-driven platform logics.
This shift aligns with Shoshana Zuboff’s [
20] notion of a surveillance capitalist paradigm, wherein data-intensive monitoring commodifies human interactions and outputs. In universities, everyday pedagogical activities, such as lecturing, mentoring, or assessing, increasingly generate data that are recorded and algorithmically analysed. This enables unprecedented levels of institutional control and optimisation, but at the cost of encroaching on faculty autonomy and privacy. The conversion of teaching into a stream of data points raises serious ethical issues: Who owns and controls these data? How are analytics being used to make decisions about teaching quality or employment? These concerns highlight pronounced power asymmetries that often favour institutional and commercial interests over the rights and agency of individual educators [
2,
5]. Preston [
11] further contends that AI intensifies these asymmetries by positioning educators as conscious linkages in a machine-led knowledge production chain, amplifying alienation rather than liberating academic labour.
Within this landscape, Karl Marx’s theory of alienation provides a robust framework to critically explore AI’s impact on educators. Marx [
18] identified four dimensions of alienation under capitalist labour: alienation from the product of one’s labour, from the labour process, from one’s human essence or species-being, and from fellow workers. In the context of AI in education, these translate to distinct but interrelated forms of estrangement that educators may experience.
2.1. Alienation from the Product of Labour
Educators may feel disconnected from the outcomes of teaching (e.g., syllabi, lesson materials, and student learning and assessment) when these are heavily mediated or even generated by AI. For example, if an AI system autogenerates lesson content or grades student work, teachers might sense that the “product” of teaching is no longer their own, eroding pride and ownership in their work. Noble [
9] famously documented how course materials in online platforms are detached from their creators, foreshadowing similar dynamics with contemporary AI tools.
2.2. Alienation from the Process of Labour
The methods and processes of teaching can become prescribed by AI tools (e.g., algorithmic recommendations for instructional pacing, or automated tutoring systems that guide student interactions). Educators may find their day-to-day pedagogical decisions constrained by AI-driven protocols and scripts, leading to a loss of agency in how they teach. This resonates with Marx’s analysis of workers subjected to mechanised production, now reflected in teachers adapting to AI-driven workflows.
2.3. Alienation from Species-Being (Professional Identity and Creativity)
Teaching is not just a technical task—it is a professional and ethical practice bound up with personal identity, creativity, and care. Under extensive AI governance, educators might feel that their roles are reduced to caretakers of machine-driven processes, diminishing the creative and moral fulfilment they derive from the profession. This existential alienation is evident if teachers feel their work lacks meaning or that their values are at odds with algorithmic imperatives, for instance, if a teacher’s pedagogical ethos of holistic education clashes with AI’s emphasis on measurable outcomes. Hall [
10] links this to a wider loss of academic agency, while Preston [
11] warns of an emerging ‘AI Taylorism’ that fragments scholarly creativity.
2.4. Alienation from Interpersonal Relations
Education is fundamentally relational, built on teacher–student and collegial relationships. AI-mediated communication tools, learning analytics, and surveillance systems can introduce a sense of distance or mistrust in these relationships. Teachers may feel isolated or in competition with peers if AI metrics are used to compare performance, and they might sense a weakening of the teacher–student bond if interactions are increasingly routed through or observed by machines. This reflects Marx’s notion of workers being alienated from each other when social relations are governed by production metrics and external control.
Using this theoretical lens, our study anticipates that AI integration, especially when implemented in a top-down, non-consultative manner, could produce measurable feelings of alienation among educators. At the same time, we recognise that contemporary scholars of digital labour stress not only the risks but also avenues for resistance and re-appropriation of technology. Critical educational technologists have argued for approaches like participatory design, co-creation of AI tools with teachers, and stronger professional governance to mitigate alienating effects [
27,
28]. These approaches align with the broader notion of sustainable digital education, wherein technology adoption is balanced with human-centric values, ensuring that innovations enhance rather than erode the social foundations of education. In line with the exploratory nature of our empirical analysis, we treat the following quantitative findings as illustrative evidence that supports, but does not conclusively prove, the theoretical patterns outlined here. In the sections that follow, we detail our empirical approach to investigating these issues and discuss findings that highlight both the challenges and possible pathways towards more sustainable, ethically aligned AI integration in educational settings.
3. Methodology
This study adopts an exploratory, quantitative case study design to empirically examine educators’ experiences of alienation stemming from AI integration in higher education. Rather than testing confirmatory hypotheses, the aim is to provide illustrative evidence that grounds our Marxian analysis in real-world perceptions. Building on Marx’s theoretical dimensions of alienation, our research explores the ethical, emotional, and professional impacts of AI-driven work transformations on educators. By developing and administering a theory-informed survey instrument, we investigate how algorithmic management and AI tools are reshaping academic labour and whether these changes align with or undermine sustainable educational practices. Because the design is cross-sectional and self-report-based, causal inferences cannot be drawn, and results are interpreted as exploratory patterns to inform future research. The methodology is designed to yield insights that are transferable to a variety of international higher education contexts undergoing digital transformation.
3.1. Participants and Context
We collected data from a diverse sample of 395 educators in Northern Cyprus, drawn mainly from higher and secondary education, with smaller proportions of primary and vocational educators. This regional context provides a valuable case study of early-stage AI adoption: local universities and schools have shown enthusiastic interest in educational technologies, supported by institutional policies, but AI integration remains nascent and uneven. Studying this context allows us to capture educators’ initial encounters with AI-driven changes—insights that are relevant to other global institutions at similar early-to-mid stages of digital transformation. The Northern Cyprus setting also reflects a mix of socio-cultural and economic conditions common to many developing or small-country contexts, offering a perspective on how sustainable digital education initiatives might unfold beyond the usual focus on large Western education systems.
Participants were recruited through a combination of convenience and snowball sampling via established educator networks. We aimed for broad representation across gender, academic rank, institutional affiliation, and prior exposure to AI. The final sample included university faculty (approximately 45%), secondary school teachers (40%), primary educators (10%), and a small number of vocational/alternative educators (5%). A substantial proportion reported intermediate (50%) or advanced (25%) familiarity with AI technologies in education, ensuring that many respondents could provide informed opinions on AI tools. This diversity in background and experience allowed us to explore whether perceptions and experiences of AI-related alienation differ by educational level or by educators’ digital competencies.
3.2. Research Design and Instrumentation
We adopted a cross-sectional survey design using a structured questionnaire comprising 20 Likert-scale items plus demographic questions. The instrument was designed to measure five constructs corresponding to the theoretical dimensions of alienation, along with an index of overall AI perceptions:
Alienation from the product of labour (4 items)—“I feel that AI-driven tools make it harder for me to take pride in my students’ learning outcomes”. “AI tools used in grading and assessment distance me from the impact of my teaching”. “I feel that my students’ success is less connected to my efforts due to AI-driven support systems”. “AI involvement in instructional outcomes reduces my sense of ownership over student progress”.
Alienation from the labour process (4 items)—“Using AI in lesson planning limits my ability to make decisions about my teaching methods”. “AI tools have reduced the creative aspects of my work, making teaching feel more repetitive”. “I feel less engaged with the teaching process due to the standardisation imposed by AI”. “AI reliance in teaching undermines my ability to adapt lessons to my students’ needs”.
Alienation from professional identity (species-being) (4 items)—“The integration of AI into teaching diminishes the meaningful aspects of my role”. “AI in the classroom restricts my ability to express my teaching style and values”. “I feel less fulfilled in my work because AI has automated many of the tasks that made teaching rewarding”. “AI-driven routines prevent me from exercising my full professional potential”.
Alienation from interpersonal relations (4 items)—“AI tools reduce my opportunities to collaborate with other teachers”. “I feel more isolated in my teaching role due to AI-driven communication and interaction tools”. “The use of AI in my classroom makes me feel distanced from my students”. “AI-mediated tasks limit my direct engagement with students and their individual learning needs”.
Overall perceptions of AI (4 items)—“The benefits of AI in education outweigh the downsides”. “AI is an effective tool for enhancing learning outcomes”. “I am concerned that AI will lead to a further loss of control over my teaching methods”. “I believe that AI in education is here to support teachers, not replace them”.
Each alienation dimension’s items were crafted by adapting Marx’s taxonomy of estranged labour [
18] to the education context, ensuring content validity of the measures. Responses were given on a 5-point Likert scale from strongly disagree (1) to strongly agree (5), where higher scores on alienation items indicate greater alienation, and higher scores on positively worded AI perception items indicate more optimism about AI.
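To make the scoring procedure concrete, the sketch below illustrates one way the composite scale scores could be computed from item-level responses. It is a minimal Python illustration under stated assumptions: the item column names (prod_1 … ai_4) are hypothetical, and we assume negatively worded AI perception items are reverse-scored before averaging; the actual survey export and scoring procedure may differ.

```python
import pandas as pd

# Hypothetical item names; the actual survey export may use different labels.
ALIENATION_SCALES = {
    "product":   ["prod_1", "prod_2", "prod_3", "prod_4"],
    "process":   ["proc_1", "proc_2", "proc_3", "proc_4"],
    "identity":  ["ident_1", "ident_2", "ident_3", "ident_4"],
    "relations": ["rel_1", "rel_2", "rel_3", "rel_4"],
}
AI_PERCEPTION_ITEMS = ["ai_1", "ai_2", "ai_3", "ai_4"]
# Assumption: negatively worded AI items (e.g., concern about loss of control)
# are reverse-scored so that higher composite scores indicate more optimism.
REVERSE_CODED = ["ai_3"]

def score_scales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return the data frame with one composite mean score per scale."""
    scored = responses.copy()
    for item in REVERSE_CODED:
        scored[item] = 6 - scored[item]  # reverse a 1-5 Likert item
    for name, items in ALIENATION_SCALES.items():
        scored[f"alienation_{name}"] = scored[items].mean(axis=1)
    scored["ai_perception"] = scored[AI_PERCEPTION_ITEMS].mean(axis=1)
    return scored
```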
The survey was provided in both English and Turkish (the primary languages of instruction in Northern Cyprus) to maximise accessibility and participation. We conducted careful translation and back-translation to maintain consistency of meaning across languages. The questionnaire was distributed through both online (Google Forms) and paper formats, accommodating varying levels of digital literacy among educators and reducing sampling bias. Data collection spanned a 12-week period in 2024. The 12-week data collection period reflected the time required to reach a sufficient and diverse sample of educators and was not timed to coincide with a specific curricular event or AI rollout, but rather to accommodate participant availability. Participation was voluntary and anonymous—no incentives were used beyond a general appeal to contribute to research on technology in education. We adhered to ethical research standards, obtaining informed consent from all participants and securing Scientific Research Ethics Committee approval. Participants could skip any questions or withdraw at any time, and we refrained from sending repeated reminders to avoid coercion, thereby preserving the voluntary nature of participation.
3.3. Data Analysis
We employed a multi-step statistical analysis to examine the data. First, we conducted descriptive statistics to understand the central tendencies and variances for each survey item and construct. Next, we evaluated the internal consistency of each multi-item scale using Cronbach’s alpha. All four alienation subscales demonstrated acceptable to good reliability (α ranging from 0.78 to 0.85), indicating that the items coherently measure their intended constructs. By contrast, the AI Perceptions scale exhibited very low reliability (α = 0.42). Consistent with psychometric conventions (α < 0.60 is considered poor), we therefore treated this scale as statistically fragile and any findings involving it as strictly exploratory and hypothesis-generating, not suitable for confirmatory inference.
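For transparency, the snippet below shows a standard way of computing Cronbach’s alpha from item-level data; it is a generic sketch rather than our exact analysis script, and it assumes the scored data frame and item lists from the scoring sketch above.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: rows are respondents, columns are items of one scale."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example (assumes `df` scored with the earlier sketch):
# for name, items in ALIENATION_SCALES.items():
#     print(name, round(cronbach_alpha(df[items]), 2))
# print("ai_perception", round(cronbach_alpha(df[AI_PERCEPTION_ITEMS]), 2))
```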
To validate the construct structure of our instrument, we performed an exploratory factor analysis (principal component analysis with varimax rotation). The Kaiser–Meyer–Olkin measure (>0.80) and Bartlett’s test (p < 0.001) confirmed sampling adequacy and factorability. The analysis supported a four-factor structure for the alienation items, which together explained a substantial portion of variance (over 70%). The alienation items loaded strongly on their respective factors (loadings > 0.6, minimal cross-loadings), affirming the distinctness of each alienation construct. The Overall AI Perceptions items loaded on a separate, fifth factor, albeit with some dispersion, reflecting the attitudinal ambivalence captured by that scale.
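As an illustration of this step, the sketch below uses the third-party factor_analyzer package to run the adequacy tests and a varimax-rotated solution. This is one possible tool, not necessarily the software used in the study, and it again assumes the hypothetical data frame and item names introduced above.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Assumes `df` and ALIENATION_SCALES from the scoring sketch above.
item_cols = [item for items in ALIENATION_SCALES.values() for item in items]
X = df[item_cols].dropna()

chi_square, p_value = calculate_bartlett_sphericity(X)  # factorability test
kmo_per_item, kmo_total = calculate_kmo(X)              # adequacy (want > 0.80)

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(X)
loadings = pd.DataFrame(fa.loadings_, index=item_cols)  # inspect loadings > 0.6
variance, proportion, cumulative = fa.get_factor_variance()
```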
We then computed Pearson correlation coefficients among all key variables. As expected, the four alienation dimensions were positively inter-correlated (r ranging ~0.4–0.6, p < 0.001), suggesting that educators experiencing one form of alienation (say, process alienation) are likely to experience others as well—a coherence consistent with Marx’s theory that these alienation facets stem from common structural roots. Notably, educators’ overall positive perception of AI was negatively correlated with each alienation dimension (r ≈ −0.3, p < 0.001), hinting that optimism about AI might buffer against feeling alienated.
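The correlational step can be reproduced with a few lines of Python; the sketch below computes pairwise Pearson coefficients and p-values among the composite scores, assuming the hypothetical column names defined earlier.

```python
from itertools import combinations
from scipy.stats import pearsonr

SCALE_COLS = [
    "alienation_product", "alienation_process",
    "alienation_identity", "alienation_relations", "ai_perception",
]

# Pairwise Pearson correlations (with p-values) among the composite scores.
for a, b in combinations(SCALE_COLS, 2):
    r, p = pearsonr(df[a], df[b])
    print(f"{a} x {b}: r = {r:.2f}, p = {p:.4f}")
```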
To probe this further, we conducted linear regression analyses predicting each type of alienation from the Overall AI Perceptions score. In all four models, a more positive perception of AI was associated with lower alienation (standardised β between −0.45 and −0.76, p < 0.001). The relationship was strongest for alienation from the product of labour (β ≈ −0.76, explaining ~26% of variance), and comparatively weaker for alienation from professional identity (β ≈ −0.45, ~11% variance). Given the unreliable AI Perceptions measure, these regressions should not be interpreted causally—they serve only as exploratory indications of potential associations requiring confirmation with more robust instruments.
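The sketch below shows one way to obtain standardised coefficients by regressing a z-scored outcome on a z-scored predictor with statsmodels. It is illustrative only: the variable names are the hypothetical ones from the scoring sketch, and the study’s actual models may have been specified differently (for example, with additional covariates).

```python
import statsmodels.api as sm

def standardised_simple_regression(data, outcome, predictor):
    """OLS of a z-scored outcome on a z-scored predictor; the slope is the
    standardised beta. Returns (beta, p-value, R squared)."""
    cols = data[[outcome, predictor]]
    z = (cols - cols.mean()) / cols.std(ddof=1)
    model = sm.OLS(z[outcome], sm.add_constant(z[[predictor]])).fit()
    return model.params[predictor], model.pvalues[predictor], model.rsquared

# for dim in ["product", "process", "identity", "relations"]:
#     beta, p, r2 = standardised_simple_regression(df, f"alienation_{dim}", "ai_perception")
#     print(dim, round(beta, 2), round(p, 4), round(r2, 2))
```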
Finally, we explored potential moderating factors using subgroup analyses. We examined whether the link between AI perceptions and alienation differed by educators’ level (higher education vs. primary/secondary) or their self-rated AI familiarity. For this, we added interaction terms in regression models (e.g., AI Perception × HigherEd faculty). The interactions were not statistically significant (p > 0.05) for most comparisons, suggesting that the buffering effect of positive AI attitudes on alienation was fairly consistent across different educator subgroups. There was a minor trend that the youngest educators (under 30) showed a slightly weaker relationship between AI optimism and alienation, possibly indicating that early-career teachers might feel alienated by structural issues regardless of their enthusiasm for technology. However, this trend was not strong enough to draw firm conclusions and would require further qualitative exploration.
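Finally, a moderation term of this kind can be tested via the formula interface of statsmodels, as sketched below. The moderator columns higher_ed and ai_familiarity are hypothetical labels introduced for illustration; this is not the exact specification used in the study.

```python
import statsmodels.formula.api as smf

# Assumed hypothetical moderator columns: 'higher_ed' (1 = university faculty,
# 0 = other levels) and 'ai_familiarity' (self-rated familiarity).
moderation = smf.ols(
    "alienation_product ~ ai_perception * higher_ed", data=df
).fit()
print(moderation.summary())  # 'ai_perception:higher_ed' term tests moderation
```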
In summary, our analysis strategy allowed us to quantify educators’ alienation experiences and relate them to their attitudes about AI, while also checking the robustness of these patterns across various groups. All statistical findings involving AI Perceptions are presented as illustrative and exploratory in light of measurement limitations. Nonetheless, the combination of reliability checks, factor analysis, correlations, and regressions provides a comprehensive view of the data, laying the groundwork for interpreting the findings in light of our theoretical framework and the broader quest for sustainable, ethical digital education practices.
4. Results
This section presents the empirical findings from our survey of 395 educators, organised by the key research dimensions. Because the AI Perceptions scale demonstrated low internal consistency (α = 0.42), all statistical relationships involving that measure were interpreted as exploratory associations rather than confirmatory evidence. That caveat aside, the results reveal a nuanced picture: educators are experiencing notable forms of alienation in the context of AI integration, yet those who see AI in a positive light report these effects as less severe. We detail the extent of each alienation dimension, followed by the statistical relationships between AI perceptions and alienation.
4.1. Descriptive Overview of Alienation and AI Attitudes
4.1.1. Alienation Levels
Educators reported moderate to high levels of alienation on all four dimensions (using a 1–5 scale, where 3 = neutral). The mean score for alienation from the product of labour was highest at 3.9 (SD = 0.8), indicating that many educators felt a loss of ownership and pride in outputs like lesson content and student achievements when AI was involved. Alienation from the labour process had a mean of 3.7 (SD = 0.7), reflecting substantial perceived constraints in how teaching tasks are carried out under AI systems. Alienation from professional identity averaged 3.5 (SD = 0.9), suggesting that a significant number of educators sensed an erosion in their sense of purpose or creativity. Alienation from interpersonal relations was somewhat lower at 3.2 (SD = 0.8) but still above the neutral midpoint, implying emerging concerns about relational distance or mistrust introduced by AI tools. Taken together, these descriptive results confirm that alienation is not a fringe experience even at an early adoption stage, aligning with Noble and Hall’s warnings that digital technologies can rapidly erode professional agency [
9,
10,
29].
4.1.2. Attitudes Towards AI
The Overall AI Perceptions scale had a mean of 3.3 (SD = 0.6), suggesting a roughly balanced split between optimistic and pessimistic views about AI in education across the sample. As noted, its low α (0.42) indicates that the items do not cohere into a single attitude dimension, pointing to fragmented and ambivalent educator opinions; thus, we refrain from drawing strong conclusions from this metric and treat any correlations as tentative. This ambivalence itself cautions against oversimplifying teacher attitudes as simply “pro-” or “anti-” AI.
4.2. Alienation from the Product of Labour
Among the four dimensions, alienation from the product of labour emerged as the most pronounced. Educators reported feeling detached from pedagogical outputs, such as curricula, lesson plans, assessments, and even student learning outcomes, when these became entwined with AI systems. Qualitatively, respondents echoed that using AI (for instance, AI-generated lesson content or automated grading software) sometimes made them feel that “the results don’t feel like mine”. They expressed diminished pride or sense of authorship, as if the creative and intellectual ownership of their work was partially ceded to the technology.
Statistically, this dimension also showed the strongest association with attitudes towards AI. As noted earlier, regression analysis found that a more positive perception of AI was associated with significantly lower alienation from the product (β ~ −0.76, p < 0.001). Given the exploratory nature of this regression, we interpret it as a tentative pattern: educators who regarded AI as a collaborative aid reported a greater sense of authorship, whereas sceptics felt AI-generated content was “foreign” to their teaching style.
Institutional Implication
To ensure sustainable adoption, leaders should involve teachers in content-related AI implementations, allow them to modify AI outputs, and frame AI as augmentative—not substitutive. Such participatory control directly addresses Marx’s concern with the loss of ownership over the product of labour and echoes Preston’s call for educator co-governance of AI tools [
11].
4.3. Alienation from the Educational Process
The second dimension, alienation from the process of labour, was also notably high. Educators described the teaching process as increasingly constrained by AI-driven workflows, e.g., having to follow scripted lesson paths recommended by intelligent tutoring systems, or adjusting their pace and content to satisfy the parameters of learning analytics dashboards. Many felt that the means of teaching were becoming dictated by external algorithms, “teaching by template” as one might characterise it, rather than by their training, creativity, or on-the-fly pedagogical decisions in response to student needs.
Our analysis showed that positive attitudes towards AI did correlate with lower alienation from the process, but this effect was more modest (β ~ −0.51,
p < 0.001) compared to the product dimension. This suggests that even optimistic educators, who might embrace AI tools, could still feel frustrated by the algorithmic rigidities those tools introduce, a finding consistent with Hall’s argument that metric governance curtails professional discretion [
10]. Indeed, an educator might like an AI recommendation system for providing helpful insights yet still chafe at the sense of being
led by the nose by that system. The limited buffering effect of positive attitudes here highlights that structural conditions, such as how much autonomy a teacher has to override or diverge from AI suggestions, play a critical role. Enthusiasm for technology cannot fully compensate if the technology imposes overly standardised practices.
From a policy perspective, this finding calls for balancing efficiency and innovation with preservation of pedagogical discretion. To maintain sustainable teaching practices, institutions should avoid an over-reliance on automation in areas that require human nuance. For example, algorithmic scheduling or content sequencing systems should be adjustable, allowing teachers to make exceptions or modifications based on their classroom judgment. Training programs could emphasise that AI outputs are suggestions, not mandates. By keeping the human in the loop of educational processes [
30], schools and universities ensure that teachers remain active agents in shaping how technology is used, rather than passive executors of AI-driven instructions [
4]. This balance is crucial for educator buy-in and for sustaining the quality of teaching and learning: if teachers feel they have
no room to teach, the innovation will ultimately fail its educational mission.
4.4. Alienation from Professional Identity (Species-Being)
Perhaps the most profound form of alienation observed was in terms of educators’ professional identity and moral agency, analogous to what Marx termed alienation from one’s “species-being”. Educators voiced concerns that the increasing role of AI in their work was at odds with the core values and creative fulfilment that drew them to the profession. Some respondents feared a drift towards a dehumanised teaching model, saying it made them feel “
less like a mentor and more like a cog in a machine”, mirroring Zuboff’s [
20,
24] and Komljenovic’s [
16] criticisms of surveillance and platform logics.
This dimension of alienation proved comparatively less sensitive to individual attitudes about AI. Our regression showed only a mild reduction in identity alienation among those positive about AI (β ~ −0.45, p < 0.001, the lowest R² among the four dimensions). In practical terms, regardless of whether an educator was enthusiastic or wary about AI’s benefits, they often shared similar existential concerns about what teaching is becoming under AI governance. This suggests that some aspects of alienation are structural and cultural, not easily assuaged by simply encouraging a more positive mindset. It points to an important insight: ensuring sustainability in digital education is not just a technical or attitudinal challenge, but a deeply ethical and cultural one.
Addressing this form of alienation likely requires institution-wide reaffirmation of the core values of education. Universities and schools must actively engage with questions like: How do AI tools serve our educational mission? Do they uphold or undermine academic freedom, creativity, and the teacher’s role as a moral and intellectual leader? Institutions might develop clear ethical guidelines for AI use that enshrine principles of human dignity, equity, and the primacy of pedagogical goals over mere efficiency. Including educators in drafting these guidelines (as part of participatory governance, discussed later) can empower them to be co-creators of a vision of AI that aligns with their professional identity. Furthermore, educators should receive support in developing AI ethics literacy, understanding the implications of AI on privacy, bias, and fairness, so they feel equipped to integrate AI in ways that reinforce, not erode, their professional integrity [
31]. Only by bridging the gap between innovation and professional values can AI integration be truly sustainable in the long run.
4.5. Alienation from Interpersonal Relations
Alienation from interpersonal relations, the distancing between teachers and students or among colleagues, was the least severe form reported, yet it remains a significant concern. The mean score (3.2) hovered just above neutral, and qualitative comments were mixed. Some educators did not perceive major relational changes yet, especially if AI use was still minimal in their day-to-day teaching. However, many others described a subtle transformation in their work environment: interactions with students becoming increasingly mediated by digital platforms, and institutional meetings, now centred on data and dashboards, shifting the focus from collegial collaboration to performance-based comparison. Such observations signal early relational strain or a creeping sense of isolation attributed to technological mediation.
The statistical analysis found that more positive AI perceptions correlated with lower relational alienation (β ~ −0.53, p < 0.001), which is notable. Even optimistic respondents recognised early signs of relational strain, reinforcing arguments that AI can mediate and potentially weaken pedagogical relationships unless deliberately human-centred. Those who believed AI could enhance communication (for example, through analytics that identify students in need, or AI tutors that free up teacher time) tended to feel less of a relational gap. In contrast, teachers suspicious of AI might also be more attuned to its potential to interfere in human relationships, thus feeling more alienation here. Yet even the optimists did not score extremely low on this dimension, implying that some relational challenges are recognised across the board.
To foster sustainable digital education, it is crucial for institutions to proactively protect and enhance human relationships in tandem with technological advancement. Practical steps might include emphasising collaborative use of AI (for instance, using AI tools that encourage teacher–student interaction rather than replace it), and maintaining forums for teacher collaboration that are not solely data-driven. Schools can cultivate a culture where data from AI are used to support teachers in reaching out to students (e.g., identifying quiet students who need engagement) rather than to surveill or rank teachers. Similarly, if AI is used to monitor teaching, the data should be handled in a collegial, developmental manner rather than pitting faculty against each other. By designing AI implementations that are human-centred and relationship-enhancing, educational institutions can mitigate alienation in this dimension. This aligns with the concept of human-in-the-loop and with principles of social sustainability, ensuring that the introduction of technology does not erode the social fabric of the educational community.
4.6. Summary of Quantitative Relationships
All four alienation dimensions were present at meaningful levels among educators facing AI integration. In exploratory models, overall AI optimism correlated with lower alienation, but the buffering effect was partial and dimension-specific. Crucially, because of the unreliable AI Perceptions measure, these associations should be viewed as preliminary signals for further study, not definitive causal findings. This suggests a potentially important buffering role of agency and mindset: when educators feel they have some control over or alignment with the AI tools (hence a positive view), they are buffered against the negative feelings of alienation. However, the protective effect of positive attitudes was incomplete—it varied by dimension and never fully eliminated alienation. Especially for deeper issues of identity and process, structural constraints imposed by AI overshadowed personal optimism.
These results highlight a central paradox: AI is heralded as an innovation to improve teaching and learning, yet current implementation practices risk alienating the very professionals responsible for educational quality. Sustainable digital education, therefore, hinges on participatory implementation, ethical governance, and an unwavering focus on human-centric values.
In the following section, we discuss the implications of these exploratory findings in depth, connecting them back to our Marxian framework and drawing out recommendations for policy and practice. Our goal is to outline how AI’s undeniable potential in education can be harnessed in a way that is ethically sound and sustainable, enhancing teaching and learning while preserving the core elements that make education a meaningful, human endeavour.
5. Discussion
The findings of this study illuminate a pivotal tension in the global movement towards AI-enabled education: educators exhibit cautious optimism about the promise of AI even as they experience concrete forms of alienation in their daily work. Because the statistical links with AI Perceptions are exploratory, we interpret them as indicative patterns rather than causal proof. This dual reality captures the crux of sustainable digital transformation in education—the need to balance innovation with well-being, and efficiency with ethics. While our study focused on educators in Northern Cyprus (an illustrative case of early-stage AI adoption), the insights are broadly relevant to educational institutions worldwide navigating similar transitions.
At its core, our analysis suggests that AI in education is not merely a pedagogical tool but a powerful socio-technical regime that is reconfiguring academic labour, professional identities, and ethical norms. This resonates with Noble’s critique of ‘digital diploma mills’ [
9], Hall’s depiction of the proletarianised academic [
10], Preston’s warnings about AI Taylorism [
11], and Komljenovic’s account of platform rentiership in higher education [
1,
16]. These changes echo patterns seen in other sectors under digital transformation, reinforcing that education is not immune to the broader dynamics of algorithmic management and digital capitalism. The Marxian lens of alienation proved useful in naming and analysing the discontents that accompany AI’s advance: a loss of ownership, constrained autonomy, fragmented professional selves, and strained relationships.
Alienation from the product of labour was the most pronounced. Educators felt a detachment from curriculum and assessment outputs when these were heavily influenced or produced by AI. This finding directly mirrors Marx’s original concern and Noble’s documentation of academic content appropriation [
9]. The strong inverse relationship we found between positive AI perceptions and product alienation (β ≈ −0.76) suggests that co-creative use of AI can mitigate estrangement: when teachers shape AI outputs, they retain authorship. This points to a recommendation for practice: involve educators in iterative design loops, allow editing rights over AI-generated materials, and visibly attribute final content to teacher–AI collaboration. For instance, if an AI tool generates quiz questions or lesson plans, teachers should have opportunities to curate, edit, and contribute to those materials, thereby co-producing the final product and keeping their authorship intact.
Alienation from the educational process likewise reflected educators’ frustration with AI-driven workflows. Teachers reported feeling governed by AI-driven protocols and metrics, which can reduce their autonomy and flexibility. Here, we see a nuanced outcome: while educators who liked AI felt this loss less severely, even they could not escape it entirely (reflected in the moderate β ≈ −0.51). This underlines that structural conditions, such as an institution’s decision to rigidly enforce data-driven teaching schedules or standardised algorithmic assessments, have a significant impact. Hall’s argument that managerial metrics subsume academic discretion [
10] is borne out here. For sustainable AI integration, it is crucial that efficiency gains do not come at the expense of professional judgment. Universities should critically evaluate where AI automation truly adds value and where it might undermine the art of teaching. As an example, an AI system might be excellent at flagging students who are falling behind, but the intervention (how to help that student) should be left to the teacher’s discretion. Preserving this human element not only combats alienation but also likely leads to better educational outcomes, since teaching often relies on tacit knowledge and emotional intelligence that AI cannot replicate.
The area of professional identity (species-being) touches on the sustainability of the teaching profession itself. A key insight from our data is that improvements in attitude or training alone are unlikely to suffice in this domain: positive views of AI scarcely shifted these existential concerns. If educators universally feel that the soul of their profession is under threat, this signals a need for systemic and cultural responses. This supports Preston’s claim that AI intensifies rather than relieves academic exploitation [
11]. One avenue is to develop an explicit institutional ethos for AI: a statement of principles or values that guides why and how AI is used in teaching. If such an ethos is co-created with educators and perhaps even students, it can help reclaim a sense of purpose. For example, an institution might declare, “We use AI to enhance
human learning and development, not to replace human educators”, and outline commitments (like always having human oversight in critical decisions, or limiting AI to supportive roles). Backing up these words with concrete policies, such as limits on automation in grading or ensuring that faculty evaluations are not solely metric-driven, would demonstrate to educators that their core professional values are being safeguarded. This alignment is essential for the
social sustainability of digital education initiatives: without it, we risk scenarios where talented teachers disengage or leave the profession, which would undermine educational quality and continuity.
When it comes to interpersonal relations, our findings warn of subtle erosions in trust and community that could accumulate over time—this dimension was the least severe but nonetheless significant. The relatively lower alienation here might indicate that strong professional relationships and teaching communities have so far buffered the intrusions of technology. But the statistically significant correlations we observed (and numerous anecdotal comments from participants) suggest vigilance is needed. Here, Komljenovic’s account of platform logic [
17] cautions that datafication can pit workers against each other. An AI-rich environment could inadvertently foster competition (through constant analytics comparisons) or distance (through mediated interactions). To counteract this, institutions should invest in community-building in parallel with tech-building. For instance, when introducing AI systems for monitoring student progress, schools might also implement regular team meetings where teachers collaboratively interpret and respond to that data, rather than each teacher individually receiving a dashboard in isolation. This way, AI becomes a tool for collaboration, not isolation. Additionally, emphasising the augmentative nature of AI in parent–student–teacher communications can maintain trust; for example, clarifying that an AI alert to a student’s difficulty is a prompt for a human conversation, not an automated reprimand or impersonal action.
Across all dimensions, a consistent theme emerges—participatory and ethical governance is indispensable. Positive perceptions that buffer alienation arise when educators feel respected and empowered in AI decision-making. This aligns with broader technology adoption research and with Selwyn’s call for human-centred EdTech practice.
Our findings must be read alongside documented educational benefits of AI (adaptive learning, early-warning analytics, and personalised feedback). We do not dispute these potentials; rather, we argue, in line with critical scholars, that realising AI’s promise sustainably demands deliberate attention to labour ethics and democratic oversight. Implemented top-down, AI risks exacerbating burnout and inequality; implemented with stakeholder co-creation, it can enhance quality and equity, supporting SDG 4.
In conclusion, AI integration is a double-edged sword. It can transform teaching and learning for the better yet simultaneously threaten the pillars of educational labour. Ensuring sustainability, therefore, requires a critical labour ethics lens: recognising teachers as central stakeholders, safeguarding their rights and well-being, and framing AI as a means to enrich (not replace) human educational practice. Only through such a balanced approach can digital transformation align immediate innovation with the long-term imperative of an ethical, inclusive, and human-centred education sector. Although the findings may not generalise across all educational systems, the case of Northern Cyprus serves as a revealing example of the frictions and ethical dilemmas emerging in transitional EdTech contexts.
6. Towards Sustainable and Ethical AI Integration in Education
To align AI adoption with the values of sustainable digital education, we propose a set of strategies and governance measures informed by our findings. These recommendations emphasise human-centric, ethically grounded approaches that can help transform the current AI-driven trends into forms that support long-term educational sustainability and social responsibility. They are congruent with emerging international guidelines—for example, the European Union’s AI Act categorises many educational AI applications as “high-risk”, necessitating stringent oversight [
32,
33]. Our recommendations also echo global calls for balancing innovation with equity and ethics in education.
6.1. Establish Clear Human Oversight in AI-Augmented Teaching
Educational institutions must delineate the boundaries of automation in teaching and administrative processes. Certain critical tasks, especially those involving high-stakes decisions about students (e.g., final grading and disciplinary actions) or about faculty (performance evaluations and workload allocation), should not be left solely to AI systems. A “human-in-the-loop” principle should be adopted, wherein AI can inform or recommend, but human educators or administrators exercise final judgment [
1,
31]. This ensures that context, professional ethics, and compassion inform decisions, maintaining accountability and trust. For instance, if an AI flags a student as at-risk of failing, a counsellor or teacher should interpret that information and decide on intervention, rather than an automated email being dispatched without personal engagement. Such safeguards not only protect students and staff from errors or biases in AI but also help sustain an educational culture where human expertise and empathy remain central.
6.2. Align AI Governance with International Ethical Frameworks
Schools and universities should proactively align their AI strategies with the best practices and regulations emerging globally. The EU AI Act, for example, demands transparency, explainability, and human oversight for high-risk AI—principles highly relevant to education. Even in jurisdictions where such regulations are not (yet) law, adopting them voluntarily signals an institutional commitment to responsible innovation. Concretely, this might mean maintaining documentation on how an AI tool makes decisions (so it can be explained to those affected), regularly auditing AI systems for biases or errors, and ensuring users are informed when AI is in use (no “secret algorithms” affecting students or teachers). By holding themselves to these standards, institutions contribute to a sustainable innovation ecosystem, one where technology advances hand-in-hand with public trust and ethical accountability [
30]. Over time, aligning with global ethical norms may also simplify compliance if/when such norms become legally required.
6.3. Adopt Participatory Design and Co-Governance Approaches
Rather than top-down implementation, institutions should involve educators (and students where appropriate) in the selection, design, and deployment of AI systems. A participatory design approach, rooted in human-centred and socio-technical systems design, can greatly enhance the contextual fit and acceptance of AI tools. In practice, this could take the form of cross-stakeholder committees or working groups for any major AI project—for example, a faculty committee to pilot and provide feedback on an AI grading assistant before it is rolled out campus-wide. By incorporating on-the-ground insights, the technology is more likely to address real needs and avoid obvious pitfalls. Moreover, when teachers have a hand in shaping AI tools, they are more likely to perceive them as empowering rather than alienating. Participatory governance also fosters a sense of shared ownership of innovation, which is crucial for the sustainable uptake of new practices. Indeed, research has shown that such inclusive approaches lead to AI systems that are not only more ethically sound but also more effective in practice [
27].
6.4. Strengthen Professional Ethical Codes and Sector Standards
The education sector should update its professional codes of ethics, accreditation standards, and quality guidelines to directly address AI and data-driven practices. Bodies such as teacher unions, academic associations, and accreditation agencies have a role to play in articulating norms for ethical AI use in education. These could include principles like “AI should support, not substitute, the professional judgment of educators”, “data collected by educational AI must be transparent and used only for legitimate educational purposes”, and “educators should receive training in digital competencies and AI ethics”. By embedding such tenets into the standards that institutions are held to, we ensure a baseline of ethical practice. For instance, an accreditation criterion could be that a university has an AI ethics committee or a guideline for fair use of student data. Professional development programs should also reflect these standards, preparing educators with the digital literacy and critical skills to use AI responsibly [
34,
35]. Over time, a shared ethical framework across the sector helps prevent a “race to the bottom” in which competitive pressures might otherwise push institutions to adopt AI in unscrupulous ways for short-term gains.
6.5. Implement Accountability and Redress Mechanisms
Even with the best frameworks, issues will arise. Thus, robust channels for accountability are essential. Institutions should establish clear grievance mechanisms and feedback loops for teachers and students to voice concerns about AI use. For example, if a teacher believes an AI scheduling system is overburdening them or a student feels an AI grading mistake impacted them, there should be a process to review and address these concerns. An ombudsperson for AI ethics or an AI oversight board could be designated to handle such cases impartially. This goes hand-in-hand with maintaining transparency; people should know what AI systems are in use and how decisions are made, so they can identify potential errors or biases. The presence of accountability structures not only helps correct problems, but also has a preventive effect: if developers and administrators know that their AI implementations can be challenged and audited, they are more likely to be careful and ethical from the start [
36]. In the long run, this cultivates an environment of trust. Educators and learners are more likely to embrace AI tools when they know there is recourse if something goes wrong—that the institution has their back and will keep technology in check in service of human-centric goals.
6.6. Foster a Culture of Continuous Evaluation and Learning
Sustainable digital education means continuously learning and adapting. Schools and universities should treat AI integration as an iterative process, not a one-time procurement. They should regularly evaluate the impact of AI tools on teaching practices, student outcomes, and teacher well-being, soliciting feedback and conducting studies (like this one) within their own context. This evidence-based approach allows institutions to identify what is working and what is not—and for whom. It might reveal, for example, that a certain AI tool improves standardised test scores but at the cost of narrowed curriculum or teacher frustration, prompting a re-balancing of priorities. A feedback culture also empowers educators: it signals that their experiences matter in shaping the trajectory of innovation. Importantly, lessons learned should be shared (where possible) across the education sector, contributing to collective knowledge on how to integrate AI in line with the Sustainable Development Goals (such as quality education and decent work conditions). In essence, a sustainable approach to AI is one that remains reflective and open to change, ensuring the technology continues to serve educational aims rather than dictate them.
In summary, the pathway to sustainable, ethical AI integration in education comprises inclusive governance, robust ethical standards, and ongoing accountability. These strategies reinforce each other: participatory design leads to better standards, strong standards make accountability clearer, accountability builds trust for participation, and so on. By implementing such measures, educational institutions can lead the digital innovation wave responsibly, harnessing AI’s benefits for teaching and learning while steadfastly guarding the professional integrity, equity, and human-centric values that define true educational sustainability.
7. Limitations and Future Research
While this study provides valuable insights into AI-related alienation among educators and offers guidance for sustainable digital education practices, it is important to acknowledge its limitations and suggest directions for future research. All quantitative relationships involving the AI Perceptions scale must be interpreted cautiously, as the scale’s low reliability (α = 0.42) renders those findings exploratory rather than confirmatory.
7.1. Contextual Scope
First, our empirical data were collected in Northern Cyprus, which has its own socio-cultural and institutional characteristics. This context of early-stage AI adoption and relatively small higher education institutions may differ from larger or more technologically advanced systems, so caution is warranted in generalising the prevalence of specific alienation levels to other settings. Future comparative studies could juxtapose cases from highly digitised systems (e.g., Finland and Singapore) with those from under-resourced regions, thereby testing Hall and Komljenovic’s claims that platform capitalism manifests differently across geopolitical contexts [10,16]. Such comparative work can illuminate how local policies, resources, or cultural attitudes towards technology influence the sustainability of AI integration. Crucially, as global education moves to align with the Sustainable Development Goals (such as SDG 4 on quality education), understanding these contextual nuances will help tailor strategies that ensure no educator is left behind in the digital transformation. We therefore caution against assuming that our sample is representative of global education, while highlighting its exploratory relevance.
7.2. Methodological Design
Second, this study relied on self-reported survey data with a cross-sectional design. While this allowed us to quantify perceptions at one point in time, it cannot establish causality. Longitudinal, mixed-method designs are needed to determine whether alienation intensifies, stabilises, or diminishes as AI implementation matures or as participatory governance structures are introduced. Moreover, the low Cronbach’s alpha of the AI Perceptions scale (α = 0.42) severely limits the interpretability of results based on this measure and suggests that broad survey items did not fully capture the complexity of teacher attitudes. Future research should develop psychometrically sound instruments that differentiate between types of positivity (e.g., excitement about AI’s capabilities versus trust in the institution’s implementation of AI) and types of negativity (e.g., fear of job replacement versus ethical objections). A more granular understanding of teacher attitudes could inform more targeted interventions; for instance, if fear of job loss is a major factor, policymakers could focus on messaging and policies guaranteeing that AI will assist rather than replace teachers.
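For context on why this threshold matters, Cronbach’s alpha summarises how strongly a scale’s k items covary relative to the variance of the total score; in the standard formulation, with \(\sigma_i^2\) denoting the variance of item i and \(\sigma_X^2\) the variance of the summed score,

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right). \]

A value of 0.42 therefore indicates that the items share comparatively little common variance, well below the 0.70 level commonly cited as a minimum for research use, which is why results involving this scale are treated as exploratory rather than confirmatory.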
7.3. Mixed-Method Approaches
Third, our study would benefit from complementary qualitative research. We gained breadth by surveying hundreds of educators, but depth was limited. In-depth interviews, focus groups, or ethnographic observations could reveal how specific AI tools (e.g., automated proctoring and generative-AI lesson planners) map onto the four Marxian alienation dimensions, uncovering how exactly AI tools are experienced in the classroom or faculty office, and why certain feelings arise. For instance, an interview might reveal the story of a teacher whose innovative pedagogy was hampered by an inflexible AI system, a narrative that gives context to the alienation scores. Qualitative data could also highlight positive experiences where AI truly augmented teaching in sustainable ways, offering models to emulate. Additionally, participatory action research, in which researchers work with educators to co-design solutions (such as new training modules or governance policies) and then study their implementation, could directly bridge research and practice in advancing sustainable digital education.
7.4. Technological Scope
Fourth, the AI in education landscape is rapidly evolving. Emergent applications, such as conversational tutoring agents or predictive staffing algorithms, may introduce new forms of alienation or, conversely, opportunities for re-enchantment of academic labour, and continuous research is needed to keep the analysis up to date. For instance, the recent surge of interest in generative AI, such as GPT-based tools in classrooms, raises questions: will having an AI that can converse and answer questions reduce alienation by giving teachers more bandwidth, or increase it by taking over aspects of teacher–student interaction? Investigating such questions would extend our work. It also connects to the technical side of sustainability: future studies could explore the environmental impact of AI in education (e.g., the carbon footprint of large educational AI systems) as part of the broader sustainability calculus, a topic we only briefly touched upon [13].
7.5. Negative Wording
Fifth, the consistent use of negatively worded items across the alienation subscales may have shaped respondents’ perceptions by predisposing them to interpret AI in a critical light. While the AI perception scale included a mix of positively and negatively worded items, the alienation constructs did not. This asymmetry could introduce response bias by framing AI’s impact primarily in terms of constraint or loss. Future iterations of the instrument would benefit from incorporating both positively and negatively framed items across all constructs to elicit more balanced and nuanced responses.
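To make concrete what mixing item polarities implies at the analysis stage, the sketch below illustrates reverse-scoring a hypothetical negatively worded Likert item before forming a balanced composite; the item names, wording, and values are illustrative assumptions and do not come from our instrument.

```python
# A minimal, hypothetical sketch (not our actual instrument or data): reverse-scoring
# a negatively worded Likert item so that higher values consistently indicate more
# favourable perceptions before items are combined into a composite score.
import pandas as pd

LIKERT_MIN, LIKERT_MAX = 1, 5  # assumed 5-point response scale

# Illustrative responses from three respondents; column names are invented for this example.
df = pd.DataFrame({
    "ai_pos_1": [4, 5, 3],   # positively worded item (e.g., "AI helps me teach better")
    "ai_neg_1": [2, 1, 4],   # negatively worded item (e.g., "AI undermines my judgment")
})

# Reverse-score the negatively worded item: new = (min + max) - old
df["ai_neg_1_rev"] = (LIKERT_MIN + LIKERT_MAX) - df["ai_neg_1"]

# With all items pointing in the same direction, a balanced composite can be averaged.
df["ai_perception"] = df[["ai_pos_1", "ai_neg_1_rev"]].mean(axis=1)
print(df)
```

Balancing item wording in this way, and reverse-scoring at analysis time, would allow future versions of the alienation subscales to separate genuine critical appraisals from artefacts of question framing.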
7.6. Linking to Outcomes
Finally, an area for future research is to examine the downstream effects of the alienation we documented. Does educator alienation from AI affect student outcomes or student attitudes? Does it correlate with teacher burnout, attrition, or job satisfaction in measurable ways? Investigating such links would empirically substantiate our theoretical claim, drawn from Preston [11], that alienation undermines the long-term viability of AI reforms. Empirical validation of these impacts would powerfully underscore why addressing alienation (and by extension, ensuring ethical AI integration) is not just a “nice-to-have” but indeed crucial for achieving the promised gains of educational technology. Conversely, positive-deviance cases where educators report low alienation despite high AI uptake could reveal best practices for scaling humane, sustainable AI.
In summary, addressing these limitations through cross-context comparisons, longitudinal mixed-method designs, instrument refinement, technology tracking, and outcome linkage will advance a comprehensive understanding of how AI can serve quality, equity, and human well-being in education while avoiding the alienating pitfalls identified by Marxian and critical scholars.
8. Conclusions
This study set out to critically examine the ethical, structural, and professional implications of AI integration in education through the lens of Marx’s theory of alienation, augmented by contemporary critiques from Noble, Hall, Preston, and Komljenovic, with an eye towards what these mean for sustainable digital education. Using survey data from educators in Northern Cyprus, an illustrative context of early AI adoption, we reported exploratory associations rather than definitive causal effects, owing to the low reliability (α = 0.42) of the AI Perceptions scale. Even so, we found clear evidence that initial steps into AI-enhanced education bring about notable experiences of alienation among teachers.
The most acute form was alienation from the product of academic labour, indicating diminished authorship and ownership over teaching materials, student outputs, and assessment processes when mediated by AI. Significant levels of alienation were also reported in the process of teaching (feeling constrained by AI-driven protocols), professional identity (feeling that the ethos of teaching is undermined), and interpersonal relations (feeling isolated due to digital mediation). These patterns echo Marx’s four-fold alienation and corroborate Noble’s and Hall’s warnings that digital platforms can dispossess educators of both product and process.
A key empirical insight was the role of educators’ positive perceptions of AI. Teachers who viewed AI favourably tended to experience lower alienation across all dimensions, with the association most pronounced for alienation from the product of labour. However, because the perception measure is psychometrically weak, we frame this buffering effect as tentative and hypothesis-generating. Optimism can mitigate, yet not abolish, structural alienation. This nuance underscores our policy prescriptions: strengthening professional autonomy and participatory design matters more than mere attitude change.
From an ethical and sustainability standpoint, AI integration cannot be treated as a neutral technical upgrade. It represents a restructuring of labour, authority, and values. Left to a profit-driven or techno-centric agenda, AI risks intensifying what Preston calls “AI Taylorism”, eroding species-being and turning educators into data-monitored operators. To avoid this, our study highlights the urgent need for proactive governance that emphasises transparency, professional autonomy, and ethical co-design.
In practical terms, AI should be implemented as an augmentative tool, one that supports and amplifies teachers’ capabilities, not one that supplants their judgment or employment. Adopting a “human-in-command” model, aligning policies with emerging statutes such as the EU AI Act, and embedding accountability through ethics committees and redress mechanisms are concrete steps towards that goal. Re-skilling initiatives must include critical digital literacy and ethical reflection, empowering educators to question and steer AI. Equally crucial are opt-out provisions and the right to contest algorithmic decisions, protecting academic freedom and privacy.
Ultimately, the sustainability of AI integration depends not on technological sophistication alone, but on the social and institutional frameworks within which it is embedded. When governed in a participatory manner, AI can help achieve SDG 4 by relieving administrative burdens and enabling personalised learning; when imposed top-down, it can exacerbate burnout, inequality, and distrust. The path to incorporating AI must therefore balance innovation with inclusion, efficiency with empathy, and data with democracy.
We therefore advocate a critical labour-ethics vision of digital education, in which AI is harnessed only when it empowers educators and learners, upholds dignity and agency, and contributes to resilient, equitable systems. Future research should deploy longitudinal, mixed-method designs with validated attitude scales to confirm or refine the exploratory patterns reported here. By keeping human-centred questions at the forefront, stakeholders can ensure that digital transformation delivers immediate gains while building a foundation for enduring excellence and social sustainability. Our findings and recommendations aim to inform that ongoing conversation and to support collective efforts towards ethically sound, effective innovations in teaching and learning.