1. Introduction and Opportunities
Artificial Intelligence in Education (AIED) has rapidly evolved into a field that blends computational innovation with pedagogical inquiry, aiming to improve learning processes while raising critical ethical and methodological questions. Yet, AI does not stand alone in shaping future classrooms. A parallel trajectory is found in the rise of educational robotics, which offers hands-on, embodied experiences that cultivate computational thinking, problem solving, and creativity. Educational robotics has grown from early experiments such as Seymour Papert’s Turtle robot into a global industry and a vibrant research area spanning STEM education, language learning, and social development (Evripidou et al., 2020). This convergence of AI and robotics underscores the interdisciplinary nature of contemporary educational innovation, where digital and physical agents jointly redefine teaching and learning practices.
AIED research itself has matured into distinct thematic clusters, including foundations of AI, classroom applications, explainability, AI literacy, and the rise of generative AI. Intelligent tutoring systems and personalized learning platforms demonstrate how machine learning and natural language processing can enhance differentiated instruction. Learning analytics connects data-driven insights to pedagogy, offering teachers actionable feedback on learning trajectories (Siemens, 2013). Evidence from research on teacher preparation in ICT integration further highlights the importance of professional readiness and targeted training, lessons that are equally critical when adopting AI tools in education (Tondeur et al., 2012). The pace and breadth of this progress call for a journal that integrates pedagogical, ethical, and technical perspectives in one common forum.
2. Aims and Scope of the Journal
AI in Education aspires to be more than a repository of research papers. It is envisioned as a meeting ground for educators, engineers, learning scientists, and policymakers who share a common purpose: ensuring that innovation in classrooms is both impactful and responsible. The journal will highlight empirical classroom studies and design-based research, while also serving as a forum for critical perspectives on ethics, equity, and sustainability. We welcome explorations of teacher professional learning, AI literacy, and explainable interfaces that strengthen classroom decision-making. We encourage contributions on educational robotics and embodied learning that connect physical and digital intelligence in pedagogically meaningful ways. Transparency and reproducibility remain cornerstones of our vision, along with policy and governance analyses that examine privacy, accountability, procurement, and evaluation frameworks at the school and system levels. Finally, we will not shy away from publishing carefully documented failures and null results, recognizing their essential role in guiding the field forward.
3. Challenges and Critical Perspectives
Alongside enthusiasm, scholars urge caution. As Avraamidou (2024) reminds us, the momentum of AI in Education can reproduce patterns of extraction and exploitation if ethical considerations are sidelined. Noble (2018) documents how search engines systematically marginalize women and people of color, while Otterbacher et al. (2017) demonstrate that algorithms often penalize stereotype-violating content, reinforcing cultural biases. Luccioni et al. (2023) further draw attention to the environmental costs of large-scale AI infrastructures. These dynamics compel us to ask whose knowledge is being encoded, who truly gains, and who ultimately pays the price for innovation.
Another concern is the narrowing effect of automated curation on the diversity of knowledge and learning pathways. Reich (2020) shows how educational platforms can default to scalable, efficiency-driven models that flatten pluralism and restrict opportunities for critical inquiry and democratic participation. At the pedagogical level, the danger is not only technical but also human. AI systems that prioritize efficiency and standardization risk eroding the relational and emotional dimensions of learning. Education is more than optimized delivery; it is dialogue, care, and identity formation. If these elements are overlooked, technology may unintentionally dehumanize the very processes that it seeks to enhance.
This journal, therefore, commits to being a forum where opportunities and risks are examined side by side. By welcoming contributions that highlight the transformative potential of AI, as well as those that expose its limitations and unintended consequences, we aim to foster a constructive tension that balances innovation with caution, efficiency with care, and vision with responsibility.
4. Responsible Frameworks: Methods and Governance
Ensuring the credibility and legitimacy of AI in Education requires robust frameworks that integrate methodological rigor, institutional governance, and pedagogical ethics.
First, evaluation must not be confined to model accuracy or technical performance. It should include experimental and quasi-experimental designs, longitudinal analyses, and qualitative inquiry that capture real classroom dynamics (Shadish et al., 2002). Transparency in datasets, open code, and reproducibility are essential, as is the explicit acknowledgement of limitations and their pedagogical implications.
Second, AI in Education is a governance challenge. Questions of data privacy, accountability, and oversight demand institutional and policy frameworks. Algorithmic systems in schools should be subject to public accountability with clear mechanisms for redress, bias mitigation, and equitable access. At the international level, governance must also address the environmental footprint of AI and align technological innovation with sustainability goals (De Vries, 2023).
Third, responsible design benefits from embedding pedagogical values within the technical architecture itself. Recent work has proposed teacher-configurable validation approaches, such as the Ethical Pedagogical Validation Layer (Chatzichristofis, 2025), which assesses developmental appropriateness, semantic fidelity, and cultural sensitivity before AI outputs reach learners. Such frameworks keep educators in the role of interpreters and gatekeepers, ensuring that pedagogical responsibility remains central. By combining methodological rigor, institutional governance, and pedagogical validation, we can ensure that AI in Education is not only effective but also equitable, transparent, and human-centered. These principles will guide editorial decisions and signal the standards that we expect from published work.
5. Conclusions
AI in Education is at a crossroads. It carries the potential to transform learning experiences, empower teachers, and broaden access, yet it also risks deepening inequities, eroding trust, and prioritizing efficiency over care. The mission of AI in Education (MDPI) is to provide a scholarly forum where these debates unfold with rigor, inclusivity, and balance. We welcome empirical studies, design-based research, critical interventions, policy analyses, and reflective case studies. Above all, we seek contributions that bridge pedagogy and engineering, so that AI remains contextualized by human intention, interpretation, and responsibility.
When a child encounters the output of a generative system, the decisive issue is not only what the machine says but also how the learner understands it and how the teacher frames it. This is the essence of our editorial vision. The voice of the machine must always enter the classroom within a framework of human care, ethical responsibility, and pedagogical purpose.