1. Introduction
The rapid expansion of artificial intelligence (AI) and digitalization has become one of the most influential transformations shaping contemporary education worldwide. Digital technologies increasingly mediate teaching, learning, assessment, and institutional governance, while AI-driven systems are being integrated into classrooms, online platforms, and educational management structures. Within global policy agendas—particularly those linked to the United Nations Sustainable Development Goals—education is widely framed as a key driver of sustainable development, with Sustainable Development Goal 4 emphasizing inclusive, equitable, and quality education for all [1,2,3,4,5]. In this context, AI is often presented as a technological catalyst capable of modernizing education systems and aligning them with sustainability objectives [2,6].
Despite this optimistic framing, the growing reliance on AI and digitalization raises fundamental questions about what “sustainability” should mean in educational contexts. Technological innovation is frequently equated with progress, efficiency, and improvement, yet such assumptions risk obscuring the normative and human dimensions of education [7,8]. Education is not merely a technical system to be optimized, but a value-laden social institution concerned with human development, equity, and ethical responsibility. As AI-driven technologies increasingly shape educational practices and policies, a central research problem emerges: whether AI adoption genuinely supports sustainable education as a value-based and human-centered project, or whether it accelerates processes of technologization and commodification under market-driven logics [9,10,11,12].
A useful entry point for conceptual clarification in this debate is offered by Alam, who distinguishes analytically between sustainable education, sustainability in education, and education for sustainable development [13,14]. Whereas sustainability in education often emphasizes institutional durability, efficiency, and system continuity, sustainable education is framed as a value-based and human-centered project oriented toward social justice, equity, and long-term ethical aims. Education for sustainable development, by contrast, typically focuses on curricular and pedagogical alignment with sustainability-related goals. Alam argues that failing to maintain these distinctions has generated conceptual ambiguity in educational research and policy, allowing technologization to be treated as a proxy for sustainability rather than as a means that must itself be critically evaluated [13,14]. At the same time, concerns about instrumentalism and the reduction of education to measurable outputs are also central to broader educational theory, which warns that measurement-driven reform can displace democratic, ethical, and formative purposes [7,8].
Within the existing research field, a substantial body of literature highlights the enabling potential of AI in education. Empirical studies and systematic reviews suggest that AI-powered learning systems can enhance personalization, adaptive feedback, learner engagement, and institutional efficiency when implemented under favorable conditions [1,15,16,17]. From this perspective, AI is frequently presented as a tool capable of improving educational quality and supporting sustainability goals, particularly in well-resourced contexts with strong governance structures [2,15,18]. These studies often emphasize AI’s capacity to optimize learning processes, reduce administrative burdens, and expand access to educational opportunities [19,20].
In contrast, a growing body of critical scholarship challenges the assumption that AI-driven education is inherently sustainable. Analyses of digital transformation and internationalization in higher education demonstrate how technological expansion is often embedded within market-oriented governance frameworks that prioritize competitiveness, efficiency, and profitability [9,10,12,14]. From this perspective, AI may reinforce the commodification of education by transforming learning processes, institutional practices, and student data into economic assets [11]. This line of research underscores a fundamental controversy: whether AI functions as an instrument for educational empowerment or as a mechanism that sustains market-driven models at the expense of educational values, relational pedagogy, and meaningful learning [7,14,21].
Educational equity and ethics constitute central points of contention in debates on AI and sustainable education. Research indicates that students from low-income, rural, and marginalized backgrounds often benefit less from AI-based educational technologies due to disparities in digital infrastructure, access to devices, and institutional capacity [16,22,23,24]. Rather than functioning as equalizers, AI systems may reproduce or intensify existing inequalities [18,25]. Ethical concerns further complicate this landscape, including issues related to data privacy, surveillance, algorithmic bias, transparency, and academic integrity [19,20,26,27,28,29,30]. These challenges raise serious questions about whether education mediated by AI can remain aligned with the ethical foundations of sustainability.
The tensions surrounding AI and sustainable education become particularly visible in culturally and religiously grounded educational contexts, where education is closely linked to value formation, epistemic authority, and moral development. Studies in Islamic and value-based educational settings suggest that AI integration may commodify knowledge, weaken traditional forms of scholarly authority, and reshape the meaning and purpose of education itself [3,28,31,32]. These contexts reveal that sustainability cannot be assessed solely through technical performance indicators, but must be evaluated in relation to cultural coherence, ethical commitments, and the human purposes of learning. Such perspectives remain underrepresented in mainstream AI-in-education research, despite their analytical significance.
It should be clarified at the outset that references to culturally and religiously grounded educational contexts in this study do not introduce exceptional, marginal, or context-bound cases, nor separate empirical domains. Rather, such contexts are employed analytically as sensitivity amplifiers: normatively dense environments in which questions of educational purpose, epistemic authority, moral formation, governance, and commodification are articulated with particular clarity. By examining AI-mediated education in settings characterized by high value density, the study seeks to render visible the boundary conditions under which AI aligns with—or undermines—sustainable education as a value-based project, conditions that may remain less explicit in more technocratically framed educational environments.
Against this background, the present study examines sustainable education in the age of artificial intelligence and digitalization through a value-critical analytical approach that uses Alam’s distinctions as a heuristic entry point and places them in dialogue with human-centered educational theory, critical educational technology scholarship, and AI governance and policy frameworks [3,4,5,7,8,9,10,11,13,14,21,33,34,35]. The main aim is to assess whether—and under what conditions—AI can support sustainable education as a human-centered and value-based project rather than reducing education to a technologically optimized and commodified service. Accordingly, this study addresses the following research question:
Under what conditions does artificial intelligence contribute to sustainable education as a value-based and human-centered project, and under what conditions does it undermine it?
It is important to clarify that this study does not seek to provide a systematic literature review, bibliometric analysis, or empirical evaluation of specific AI applications in educational settings. Instead, it adopts a qualitative, conceptual, and value-critical analytical approach aimed at examining the normative assumptions, ethical tensions, and governance conditions that shape contemporary debates on AI, digitalization, and sustainable education. The focus of the analysis is therefore interpretive and theoretical rather than empirical or intervention-based.
This article is structured as follows. Section 2 reviews the relevant literature and theoretical background on sustainable education and AI-driven digitalization. Section 3 outlines the conceptual framework and analytical approach guiding the study. Section 4 explains the research design and analytical procedure. Section 5 presents the main analytical results, and Section 6 discusses their theoretical and policy implications. The article concludes by summarizing the findings and outlining directions for future research.
3. Conceptual Framework and Analytical Approach
3.1. Conceptual Foundations of the Study
This study adopts a value-critical conceptual framework to examine sustainable education in the age of artificial intelligence (AI) and digitalization. Rather than presupposing a single theoretical model, the framework is grounded in a dialogical engagement with complementary perspectives drawn from educational theory, ethics of education, and critical educational technology research. Within this plural theoretical landscape, the analytical distinctions proposed by Alam between sustainable education, sustainability in education, and education for sustainable development are employed as a heuristic and comparative reference, rather than as a self-validating or exhaustive framework [13,14]. These distinctions are analytically useful insofar as they illuminate recurring conceptual ambiguities in contemporary educational discourse, particularly the tendency to conflate technological efficiency, institutional resilience, and policy compliance with educational sustainability.
From a comparative perspective, sustainability in education is commonly associated with system-oriented approaches that prioritize institutional continuity, efficiency, and adaptability under conditions of technological and economic change. Such orientations resonate with managerial and policy-driven models of educational reform, yet have been widely criticized for reducing education to measurable outputs and functional performance. Education for sustainable development, by contrast, is typically framed around curricular and pedagogical strategies aimed at fostering awareness of environmental, social, and developmental goals, often emphasizing competencies and learning outcomes aligned with global sustainability agendas. In contrast to both orientations, sustainable education represents a more demanding normative position that situates education as a value-based and human-centered social practice oriented toward long-term intellectual, ethical, and social flourishing.
This normative understanding intersects with, and is reinforced by, broader critiques of instrumentalism and measurement-driven educational reform articulated in human-centered educational theory and critical pedagogy. Scholars such as Biesta have emphasized that education cannot be reduced to efficiency, skills acquisition, or performance indicators without undermining its ethical, democratic, and formative purposes. Similarly, critical educational technology scholarship, particularly the work of Selwyn, challenges techno-solutionist narratives and highlights the social, political, and ideological limits of AI-driven educational innovation. These perspectives converge with Alam’s critique of system-centered sustainability, while extending it by foregrounding lived educational practices, power relations, and the normative implications of technological adoption.
In addition, analyses of datafication and platform governance in education, most notably advanced by Williamson, further complicate sustainability-oriented frameworks by exposing how AI-driven infrastructures reshape educational governance, epistemic authority, and policy decision-making. From this viewpoint, sustainability claims are not merely conceptual distinctions, but are operationalized through data regimes, platform economies, and regulatory arrangements that may entrench commodification and asymmetries of power. Complementing these theoretical critiques, normative policy frameworks developed by international organizations such as UNESCO and the OECD articulate principles of ethical AI, equity, and human-centered digital transformation. While these frameworks share a concern for values and long-term educational goals, they differ in their pragmatic orientation toward governance guidelines, risk mitigation, and institutional implementation.
On this basis, the study adopts sustainable education as its primary analytical lens, while explicitly situating it in dialogue with these wider strands of educational and edtech scholarship. This positioning enables a critical evaluation of AI not merely in terms of functional performance, innovation capacity, or scalability, but in relation to fundamental educational values such as equity, human dignity, epistemic integrity, and social responsibility. By treating conceptual distinctions as analytical tools rather than presupposed truths, the framework avoids circular reasoning and remains open to competing interpretations of sustainability, educational purpose, and governance in AI-mediated education.
3.2. Artificial Intelligence as a Socio-Technical and Normative System
Rather than treating AI as a neutral technological tool, this study conceptualizes AI as a socio-technical system embedded within specific economic, institutional, and cultural contexts. AI-driven educational technologies shape not only instructional practices, but also governance structures, assessment regimes, and forms of epistemic authority. As Alam notes, technological systems introduced within market-driven and internationalized educational models tend to reflect and reinforce prevailing power relations and economic priorities [14]. This insight is consistent with broader critical perspectives in educational technology and AI governance, which emphasize that technological infrastructures are never value-neutral but are shaped by policy agendas, commercial interests, and institutional logics.
From this perspective, AI integration in education entails normative consequences that extend well beyond technical efficiency. Decisions concerning data collection, algorithmic design, platform governance, and institutional adoption implicitly encode values related to control, accountability, transparency, and the commodification of knowledge. The conceptual framework therefore rejects technologically deterministic narratives and instead emphasizes the contextual, political, and value-laden nature of AI-mediated education.
3.3. Analytical Dimensions
To operationalize this conceptual framework, the study analyzes AI and digitalization in education across four interrelated analytical dimensions:
- (1) Educational Purpose and Meaning. This dimension examines how AI reshapes prevailing conceptions of educational purpose, including tensions between education as a process of human formation and education as a system optimized for performance, efficiency, and measurable outputs.
- (2) Equity and Social Justice. This dimension assesses whether AI integration contributes to or undermines educational equity by examining issues related to access, digital divides, and the differential impact of AI systems across social, institutional, and geopolitical contexts.
- (3) Ethical and Epistemic Integrity. Here, the analysis focuses on data ethics, algorithmic bias, academic integrity, and transformations in epistemic authority within AI-mediated learning environments, including questions of authorship, assessment, and the credibility of knowledge production.
- (4) Governance and Commodification. This dimension evaluates the role of market logic, platform capitalism, and institutional governance in shaping AI adoption, critically examining whether AI reinforces the commodification of education or can be subordinated to public-oriented and value-based educational goals.
Together, these dimensions provide an integrated analytical structure through which the sustainability of AI-driven education can be assessed beyond purely technical or instrumental criteria.
3.4. Analytical Approach
Methodologically, the study employs a qualitative critical analysis of contemporary academic literature, policy-oriented discussions, and conceptual debates on AI, digitalization, and sustainable education. Rather than aggregating empirical findings, the analysis synthesizes interdisciplinary insights to identify underlying assumptions, normative tensions, and conceptual gaps that shape current debates.
This analytical approach is explicitly interpretive and normative. It does not seek to measure the effectiveness of specific AI applications, but to evaluate the conditions under which AI aligns with, or undermines, sustainable education as a value-based project. By integrating conceptual analysis with critical interpretation, the study aims to illuminate how AI may be reoriented toward ethical, equitable, and human-centered educational futures.
3.5. Positioning of the Study
By combining Alam’s conceptual distinctions with a multidimensional and theoretically plural critical analysis of AI in education, this framework positions the study at the intersection of educational theory, ethics, and digital transformation. It contributes to existing scholarship by shifting the analytical focus from technological capability to normative alignment, emphasizing that sustainable education in the digital age depends not on AI adoption per se, but on the ethical, institutional, and value-based frameworks that govern its design, deployment, and use.
4. Materials and Methods
Although this study draws on a broad body of contemporary scholarly literature, it does not adopt a systematic literature review or meta-analytic design. Rather, it employs a qualitative, conceptual, and value-critical analytical approach. The description of databases, search terms, and inclusion parameters serves a scoping and delimiting function, aimed at clarifying the contours of the literature landscape consulted, rather than constituting an analytical protocol in itself. Accordingly, the study’s analytical core lies not in the mechanics of literature retrieval, but in the conceptual examination, deconstruction, and reconstruction of key educational and normative concepts related to artificial intelligence and sustainability.
4.1. Research Design
From a methodological standpoint, this study is situated at a macro-level of depth and scope within the research pyramid, as it addresses conceptual, normative, and governance-related questions rather than empirical or intervention-based problems. In terms of type, it constitutes qualitative theoretical research. With respect to its research approach, the study is grounded in interpretive and critical research paradigms, which are appropriate for examining educational meaning, ethical assumptions, and value-laden dimensions of sustainability in education.
Regarding its purpose, the study is explanatory and normative, aiming to clarify conceptual distinctions and critically assess the conditions under which artificial intelligence (AI) contributes to, or undermines, sustainable education as a value-based and human-centered project. The type of inference employed is primarily abductive and analytical, allowing the study to move iteratively between existing literature, conceptual distinctions, and normative interpretation. In terms of data sources, the research relies on secondary data, consisting of peer-reviewed academic literature and policy-oriented normative documents. The time dimension of the study is cross-sectional and contemporary, focusing on recent developments in AI, digitalization, and sustainable education. Finally, the overall research design is best characterized as a structured qualitative critical analysis, integrating conceptual mapping, thematic analysis, and normative evaluation.
Within this methodological classification, the study adopts a qualitative, conceptual, and critical research design aimed at examining sustainable education in the age of artificial intelligence (AI) and digitalization from a value-based and normative perspective. Rather than producing empirical measurements or experimental outcomes, the research focuses on analyzing underlying concepts, theoretical frameworks, normative assumptions, and ethical tensions that shape contemporary debates on AI-driven education.
This design is particularly appropriate for addressing questions related to educational purpose, social justice, human dignity, and sustainability, which cannot be adequately captured through purely quantitative or intervention-based methodologies. Accordingly, the study is situated within established interpretive and critical traditions of educational research, emphasizing conceptual clarity, theoretical coherence, and normative evaluation.
4.2. Materials and Data Sources
The materials analyzed in this study consist of peer-reviewed academic literature, including conceptual, theoretical, and selected empirical studies published in international journals. The literature search was conducted using Scopus, Web of Science, ERIC, and Google Scholar, with a primary focus on publications from 2015 to 2024, in order to capture recent developments in artificial intelligence, digitalization, and sustainable education.
Key search terms were used in various combinations and included artificial intelligence in education, digitalization of education, sustainable education, education for sustainable development, educational commodification, AI ethics in education, and value-based education. These terms were selected to ensure coverage of technological, ethical, governance-related, and educational dimensions relevant to the analytical framework of the study.
Sources were included if they (a) addressed AI or digital technologies in educational contexts; (b) engaged explicitly with sustainability, ethics, or educational values; and (c) contributed conceptually or analytically to debates on educational purpose, equity, epistemic integrity, or governance. Sources were excluded if they were purely technical, commercially oriented, or lacked relevance to ethical, educational, or sustainability-related dimensions.
In addition to academic literature, the analysis draws on policy-oriented documents and normative frameworks related to sustainable development, educational equity, and digital transformation, including reports issued by international organizations. All materials analyzed are publicly available, ensuring transparency and analytical traceability.
4.3. Analytical Procedure
The analysis was conducted through a structured qualitative critical procedure guided by an explicit analytical framework. This framework is organized around four interrelated analytical dimensions—educational purpose and meaning; equity and social justice; ethical and epistemic integrity; and governance and commodification—which provided the conceptual structure through which the selected literature was systematically examined and interpreted.
The overall analytical flow followed in the study is illustrated schematically in Figure 1.
The analytical process followed a concept-driven and theoretically guided sequence. First, core concepts central to contemporary debates on AI and sustainable education—such as sustainability, educational purpose, efficiency, equity, and governance—were identified through a critical reading of the literature. Second, these concepts were examined in relation to their normative tensions and implicit value assumptions. Third, recurrent conceptual patterns were clustered into analytically meaningful themes. Finally, these themes were synthesized and reconstructed into a set of analytical propositions that articulate the conditional mechanisms through which AI may either support or undermine sustainable education. Throughout this process, interpretive judgment was exercised reflexively and transparently, in line with established practices in qualitative conceptual and value-critical analysis.
4.3.1. Conceptual Mapping
The first stage of analysis involved conceptual mapping through the systematic identification of key concepts across the selected literature. Core concepts such as sustainable education, sustainability in education, digitalization, commodification, ethical governance, equity, and educational purpose were identified and comparatively examined in order to analyze how they are defined, operationalized, and related to one another across different theoretical, empirical, and policy-oriented contexts. This stage enabled the clarification of conceptual boundaries, as well as the identification of tensions and overlaps among competing interpretations within debates on AI-driven educational transformation.
4.3.2. Thematic Analysis
In the second stage, the reviewed literature was organized and examined according to the predefined analytical dimensions of the study. Within each dimension, recurring themes, dominant narratives, and critical points of divergence concerning the role of artificial intelligence in education were identified through iterative reading and systematic cross-comparison of sources. Particular attention was given to tensions between enabling and critical perspectives, especially in relation to access, equity, marketization processes, and governance conditions. Themes were derived analytically through their alignment with the study’s conceptual framework rather than generated inductively in isolation, thereby ensuring coherence and transparency in the analytical process. These analytical dimensions and their associated illustrative themes are summarized in Table 1.
4.3.3. Normative Evaluation
In the final stage, the outcomes of conceptual mapping and thematic analysis were subjected to normative evaluation grounded in value-based educational theory and ethical frameworks. This stage critically examined the implications of AI integration for human dignity, epistemic integrity, educational meaning, and social justice. Normative evaluation allowed the synthesis of analytically grounded findings and facilitated the articulation of broader ethical and educational implications, moving beyond descriptive reporting toward critical interpretation.
The findings presented in Section 5 (Results) are organized in accordance with the same analytical dimensions employed throughout the analytical procedure. This alignment ensures a transparent and traceable analytical chain linking literature selection, conceptual framing, thematic synthesis, and normative evaluation. Consequently, the results do not emerge from impressionistic interpretation, but are systematically derived from the predefined analytical framework guiding the study.
4.4. Ethical Considerations
This research does not involve human participants, animals, personal data, or experimental interventions. Consequently, ethical approval from an institutional review board was not required. Nevertheless, the study adheres to established principles of academic integrity, including accurate representation of sources, critical engagement with existing scholarship, and transparency in methodological choices.
4.5. Use of Generative Artificial Intelligence
Generative artificial intelligence tools were used solely for language refinement and editorial assistance, including grammar checking, stylistic improvement, and formatting support. Generative AI was not used to generate original data, analytical arguments, conceptual frameworks, or interpretive conclusions. All intellectual content, analytical judgments, and normative positions presented in this article are the sole responsibility of the authors.
5. Results
This section presents the analytical results of the study by directly answering the central research question: under what conditions does artificial intelligence contribute to sustainable education as a value-based and human-centered project, and under what conditions does it undermine it?
The analysis demonstrates that artificial intelligence does not exert a uniform, inherently positive, or inherently negative influence on educational sustainability. Rather, its effects are conditional and governance-mediated. AI contributes to sustainable education when its adoption is subordinated to explicit educational purposes, ethical constraints, equity-oriented governance, and the protection of epistemic integrity. Conversely, when AI is embedded within efficiency-driven, data-centric, and weakly regulated environments, it tends to reinforce instrumentalism, commodification, and educational inequality.
The overall analytical logic of these results is synthesized in Figure 2, which presents a conditional model of AI and sustainable education.
These results are articulated below as analytically derived propositions that synthesize the study’s value-critical examination across four interrelated analytical dimensions: educational purpose and meaning; equity and social justice; ethical and epistemic integrity; and governance and commodification. Together, these propositions form an integrated explanatory account rather than a descriptive restatement of existing scholarship [3,13,14,33].
5.1. Analytical Result 1: The “Efficiency–Sustainability Substitution” Mechanism
A central analytical result is that AI-driven educational reforms frequently operate through an efficiency–sustainability substitution mechanism, whereby “sustainability” is implicitly redefined as operational performance—such as scalability, optimization, and measurable outputs—rather than as a value-based educational project. Under this mechanism, AI is framed as a sustainability instrument primarily because it delivers efficiency gains, even when these gains are not normatively aligned with sustainable education in its human-centered sense [7,8,13,14].
Crucially, this substitution does not require explicit market rhetoric. It often emerges through policy and institutional discourses that equate “modernization,” “innovation,” or “digital transformation” with sustainability. As a result, sustainability claims become increasingly metric-dependent: educational improvement is recognized mainly through what can be measured, automated, and benchmarked. This process risks displacing formative educational purposes related to meaning, subject formation, and democratic responsibility [7,8,21,33].
The analysis indicates that this substitution mechanism is weakened when AI adoption is guided by explicit educational aims and ethical priorities that resist reduction to performance indicators—such as human dignity, social justice, and epistemic integrity—and when educators retain a central role in shaping AI-mediated practices [8,13,33].
This mechanism is consistent with empirical accounts indicating that AI-enabled reforms are frequently operationalized through performance metrics, automated monitoring, and efficiency-oriented implementation in institutional practice, especially where accountability is dashboard-driven or market-aligned [15,18,26].
In these documented settings, sustainability discourse is commonly translated into operational indicators such as completion rates, engagement dashboards, predictive learning analytics, and productivity benchmarks. As a result, educational success is increasingly recognized through what can be measured, automated, and optimized, illustrating how efficiency-oriented AI applications may substitute value-based educational aims without explicit reference to marketization or commercial intent [15,18,26].
5.2. Analytical Result 2: The “Datafication-to-Commodification” Governance Pathway
A second major result identifies a governance pathway through which AI-driven digitalization can shift education from a value-based practice toward a commodified service: datafication → platform governance → market-aligned accountability → commodification [10,11,12,35,36,40].
AI systems intensify the production of educational data, including behavioral traces, performance analytics, and predictive profiling. These data streams reshape institutional decision-making by reconfiguring accountability around dashboards, ranking logics, and comparative metrics. Over time, educational value becomes increasingly expressed through data outputs compatible with market coordination—efficiency, competition, and standardization—thereby converting learning processes and student data into strategic assets [10,11,26,31,38].
The analysis shows that this pathway is not technologically inevitable but institutionally mediated. It is constrained when educational institutions adopt robust public-oriented governance frameworks, including transparent data governance, limits on extractive data practices, accountability mechanisms for algorithmic decision-making, and policies that explicitly subordinate platform logic to educational ethics [5,34,41,42].
Empirical and policy-oriented studies on data governance and academic integrity in AI-mediated education further indicate that datafication practices can reshape institutional accountability and learner behavior in ways that intensify commodification pressures and integrity risks [25,26].
These studies describe how learning analytics dashboards, predictive risk scoring systems, and platform-based accountability mechanisms increasingly mediate institutional decision-making. In doing so, AI-driven data infrastructures reframe learning processes, student behavior, and educational outcomes as governable and comparable data objects, thereby facilitating the transformation of educational activity and learner data into exchangeable and strategically managed assets [25,26].
5.3. Analytical Result 3: Normative Friction, Equity, and Epistemic Integrity as Interdependent Conditions
The analysis further demonstrates that sustainable AI adoption in education depends on the interaction of three interdependent conditions: normative friction, governance-based equity, and epistemic integrity. Rather than functioning as isolated variables, these conditions jointly determine whether AI supports or undermines sustainable education.
First, the presence of normative friction—ethical constraints, deliberative safeguards, and institutional oversight—emerges as a constitutive condition of sustainability. Where normative friction is absent, AI adoption tends to proceed along managerial rationalities (“what works,” “what scales”), and sustainability becomes synonymous with continuity and efficiency. Where normative friction is present, AI is more likely to remain an instrument supporting educational purposes rather than restructuring those purposes [3,5,33,34,42].
Second, the results indicate that equity outcomes under AI are best explained as a function of governance capacity rather than technological availability. AI does not generate equity autonomously; it amplifies existing institutional and structural conditions. Well-governed and well-resourced contexts can translate AI adoption into improved access and support, whereas weakly regulated and under-resourced contexts tend to experience intensified inequalities through differential access, uneven oversight, and heightened vulnerability to bias and exclusion [9,15,16,17,18,25]. Under such conditions, AI-mediated systems may reproduce disadvantages in opaque and technically mediated ways [25,27,28,30,43].
Third, the analysis identifies epistemic integrity as a “hidden variable” of sustainability. AI-mediated education affects not only learning efficiency but also the credibility, authorship, and authority structures through which knowledge is produced and validated. Automation of feedback and assessment may reduce spaces for critical reasoning; generative tools introduce ambiguities around authorship and originality; and algorithmic mediation can shift epistemic authority away from educators and scholarly communities toward opaque or commercially governed systems [23,24,29,34,37]. These dynamics threaten sustainable education insofar as sustainability is grounded in long-term intellectual and moral development rather than short-term performance [7,8,33].
The analysis indicates that equity-supportive and epistemically robust outcomes are contingent upon deliberate governance choices, including equity-oriented design, institutional accountability, bias mitigation, and explicit norms of academic integrity [3,18,20,25,29,34].
5.4. Analytical Result 4: Cultural and Value-Based Contexts as Sensitivity Amplifiers
Within the proposed conditional model, culturally and religiously grounded educational contexts are not approached as distinct empirical cases, but as analytically revealing environments that function as sensitivity amplifiers. In such settings, the integration of artificial intelligence engages more directly with questions of educational purpose, epistemic authority, and moral responsibility, thereby making visible conditional dynamics that are structurally present across educational systems but often remain less explicit elsewhere.
Because education in these contexts is understood not merely as a service, but as a value-transmitting and morally charged practice, AI adoption can have disproportionate effects on the perceived authority of educators and the normative status of knowledge itself [3,28,31,32,41]. These effects help explain why universal or “one-size-fits-all” sustainability claims are analytically weak: the same AI system may be experienced as enabling in one context and as corrosive in another, depending on how educational authority, legitimacy, and moral formation are socially organized.
Accordingly, sustainable integration of AI in value-based educational contexts requires explicit alignment between technological use and educational aims, including safeguards against the commodification of knowledge and the displacement of relational and formative pedagogical practices.
5.5. Integrated Result: A Conditional Model of AI and Sustainable Education
Synthesizing these analytical results yields a conditional explanatory model: AI supports sustainable education only when
- (i) sustainability is not substituted by efficiency metrics,
- (ii) datafication is governed by public-oriented and ethical accountability,
- (iii) normative friction constrains instrumental adoption,
- (iv) equity is treated as a governance responsibility, and
- (v) epistemic integrity is actively protected—particularly in value-based contexts where moral formation and authority structures are central [3,13,14,33,42].
Accordingly, the results move beyond the generic claim that AI has “benefits and risks”. The distinctive contribution lies in specifying mechanisms, boundary conditions, and governance pathways through which AI shapes educational sustainability in divergent and context-dependent ways. These integrated analytical results, mechanisms, and boundary conditions are summarized in Table 2.
6. Discussion
This section advances the analytical results by situating them within broader theoretical, policy, and institutional debates on artificial intelligence (AI), digitalization, and sustainable education. Rather than reiterating the findings, the discussion elevates them into a conditional explanatory account that clarifies how and under what governance conditions AI contributes to—or undermines—the normative aims of sustainable education. In doing so, the section translates value-oriented concepts into operational implications grounded in concrete educational contexts and policy frameworks.
6.1. From Dual Effects to a Conditional Model of AI in Sustainable Education
The findings indicate that AI does not exert a uniform or inherently progressive influence on education. Instead, its effects are context-dependent and governance-mediated, confirming that AI functions as an amplifier of prevailing institutional logics rather than as an autonomous driver of educational transformation. This observation aligns with critical scholarship rejecting technological determinism and emphasizing the role of organizational priorities, regulatory capacity, and normative orientation in shaping educational outcomes.
At a higher level of abstraction, this leads to a conditional model: AI supports sustainable education only when its deployment is embedded within governance arrangements that preserve educational purpose, constrain extractive data practices, and maintain human oversight in pedagogical decision-making. Where such conditions are absent, AI-driven digitalization tends to reinforce instrumental and performance-oriented models that prioritize efficiency and scalability over formative educational goals.
6.2. Distinguishing System Sustainability from Sustainable Education
A central theoretical contribution of this study lies in clarifying the distinction between sustainability in education and sustainable education. Many AI-enabled outcomes documented in contemporary reforms—such as administrative automation, personalized content delivery, and data-driven performance optimization—enhance the operational sustainability of educational systems. However, these outcomes do not necessarily advance sustainable education as a normative project concerned with human development, social responsibility, and long-term epistemic integrity.
This distinction helps explain why AI initiatives may simultaneously improve measurable outputs while generating concerns about depersonalization, commodification, and value erosion. Treating efficiency gains as indicators of sustainability risks collapsing normative educational aims into managerial performance criteria. The discussion therefore reframes sustainability not as a technical achievement, but as a value-dependent educational orientation that must be actively protected through governance and policy design.
6.3. Commodification and the Role of Institutional Governance
The findings indicate that artificial intelligence does not introduce commodification into education ex nihilo; rather, it intensifies and operationalizes pre-existing market logics through distinct technological mechanisms. In higher and transnational education systems, AI accelerates commodification by enabling large-scale data extraction, performance benchmarking, and platform-based competition, thereby aligning educational value with metrics of efficiency, visibility, and market responsiveness.
This dynamic becomes particularly evident when examining specific AI applications. In the case of generative AI systems (e.g., large language models used for writing support, assessment, and feedback), educational practices risk being reconfigured around the automation of cognition itself. Writing, evaluation, and knowledge production increasingly appear as standardized services that can be generated, optimized, and consumed on demand. This transformation reframes learning as a transactional output rather than a formative process, raising ethical concerns related to authorship, originality, and responsibility for knowledge production.
Similarly, adaptive learning systems contribute to commodification through algorithmically generated learning pathways that rely on historical performance data and predictive analytics. While such systems promise personalization, they also introduce risks of algorithmic bias by reproducing existing inequalities embedded in training data. Learners may be silently steered toward predefined educational trajectories, limiting epistemic agency and reinforcing stratified outcomes under the guise of optimization and efficiency.
The logic of commodification is further reinforced through learning analytics and AI-driven automation within learning management systems. Dashboards, performance indicators, and predictive risk scores increasingly shape institutional decision-making, transforming students into data profiles and educational success into measurable outputs. In these contexts, governance opacity becomes a central ethical issue: pedagogical judgments are partially displaced by automated metrics that are often inaccessible to critical scrutiny by educators and learners alike.
From a governance perspective, these technology-specific mechanisms demonstrate that ethical risks associated with AI—such as data commodification, algorithmic bias, and surveillance-driven assessment—cannot be addressed through technical design alone. They require institutional and regulatory interventions capable of subordinating AI infrastructures to public educational values rather than market imperatives. Policy-oriented approaches emphasizing transparency, accountability, data protection, and value-sensitive design provide essential tools for counteracting the commodifying tendencies embedded in AI-mediated educational systems.
Taken together, this analysis reframes commodification not as an abstract ethical concern, but as a technology-mediated process that unfolds differently across generative AI, adaptive learning systems, and learning analytics. Sustainable education in the age of AI therefore depends on governance frameworks that recognize these distinctions and actively regulate how specific AI applications reshape educational purposes, practices, and power relations.
6.4. Equity, Ethics, and Epistemic Integrity in Concrete Contexts
The discussion of equity and ethics gains analytical depth when situated within concrete educational contexts. Recent global and regional studies show that generative AI tools are deployed across educational levels—from primary schooling to higher education and vocational training—yet their benefits remain unevenly distributed. Disparities in digital infrastructure, institutional capacity, and regulatory oversight mean that AI adoption often reproduces or amplifies existing inequalities rather than alleviating them.
Within this discussion, culturally and religiously grounded educational contexts are not treated as separate cases, but as analytically revealing environments in which issues of governance, epistemic authority, and ethical accountability become structurally visible. Their inclusion thus serves to test the explanatory reach of the proposed conditional model rather than to introduce a new thematic direction.
Importantly, the explanatory validity of the conditional model advanced in this study does not depend on any specific cultural or religious setting. Rather, the inclusion of value-dense educational contexts serves to clarify and stress-test the model’s underlying assumptions, thereby strengthening—rather than constraining—its broader analytical and normative applicability.
Higher education contexts illustrate this tension clearly. AI-driven automation within learning management systems—such as automated grading, feedback generation, and administrative streamlining—can improve accessibility and efficiency, including in public university systems. However, without robust governance, these same tools raise questions about academic integrity, authorship, and the erosion of educators’ epistemic authority. These challenges are particularly salient in value-based and culturally grounded educational settings, where education is inseparable from moral responsibility and the cultivation of judgment.
Accordingly, ethical principles such as fairness, dignity, and integrity acquire practical meaning only when translated into governance requirements: data protection frameworks, algorithmic bias audits, transparent accountability structures, and sustained professional development for educators. The findings thus support approaches that treat ethics not as an external constraint on innovation, but as an internal condition for sustainable educational practice.
6.5. Policy and Practice Implications
At the policy level, the analysis suggests that sustainable education in the digital age cannot be achieved through innovation strategies alone. Educational policies must move beyond a narrow focus on competitiveness and technological adoption toward governance-oriented frameworks that prioritize equity, human-centered pedagogy, and epistemic responsibility. This includes explicit standards for data governance, mechanisms for auditing algorithmic decision-making, and investments aimed at reducing digital divides across institutions and learner populations.
For educational practice, the discussion reinforces the importance of positioning AI as a supportive instrument rather than a substitutive authority. Educators and institutions remain central agents in shaping how AI is integrated into teaching, assessment, and learning design. Sustainable practice requires preserving reflective pedagogy, fostering critical engagement with AI-generated outputs, and maintaining meaningful human relationships at the core of educational processes.
6.6. Limitations and Directions for Future Research
As a qualitative and conceptual inquiry, this study does not provide empirical measurement of AI’s effects across specific institutional or national settings. Future research could extend the proposed conditional model through comparative case studies, policy analyses, or mixed-method investigations examining how value-based AI governance is enacted in practice.
Further work is particularly needed in culturally and religiously grounded educational contexts, where questions of epistemic authority, moral formation, and technological mediation intersect in distinctive ways. Such research would contribute to more context-sensitive and policy-relevant models of sustainable education under conditions of rapid digital transformation.
7. Conclusions
This study has examined sustainable education in the age of artificial intelligence (AI) and digitalization through a value-critical and condition-oriented analytical framework. By conceptually distinguishing between sustainable education, sustainability in education, and education for sustainable development, the article has shown that contemporary debates on AI in education often collapse normative educational aims into metrics of technological efficiency, thereby obscuring the ethical, epistemic, and governance conditions upon which educational sustainability depends.
The analysis demonstrates that AI does not exert a uniform or intrinsic influence on educational systems. Rather, its educational effects are contingent upon specific institutional, regulatory, and value-based contexts. Under conditions characterized by ethical oversight, public-oriented governance, and human-centered pedagogical integration, AI can support access, personalization, and instructional enhancement. Conversely, in contexts shaped by weak regulation, market-driven accountability, and data-centric performance regimes, AI tends to intensify processes of commodification, inequality, and epistemic erosion.
The central contribution of this study lies in reframing AI and digitalization as mediated instruments within sustainable education rather than as self-justifying solutions. Sustainable education in the digital age emerges not from technological adoption per se, but from the subordination of AI to clearly articulated educational purposes grounded in human dignity, social justice, and epistemic integrity. By advancing a conditional and value-oriented analytical model, the study challenges technocratic and instrumental narratives that treat innovation as a sufficient or neutral pathway to sustainability.
At the policy and institutional level, the findings highlight the necessity of governance frameworks that align AI deployment with ethical accountability, equity considerations, and the preservation of meaningful pedagogical relationships. Educational technologies should be designed and regulated to support reflective learning, critical reasoning, and moral formation, rather than reducing education to data-driven optimization and performative efficiency.
Finally, this study underscores the need for continued interdisciplinary research that integrates empirical investigation with normative, cultural, and ethical analysis. Future research that combines context-sensitive empirical evidence with value-critical frameworks will be essential for developing robust models of sustainable education capable of responding to the complex and uneven effects of AI-driven digitalization across diverse educational settings.