Editorial

AI in Education: Towards a Pedagogically Grounded and Interdisciplinary Field

by
Savvas A. Chatzichristofis
Intelligent Systems Laboratory, Department of Computer Science, Neapolis University Pafos, Pafos 8042, Cyprus
AI Educ. 2025, 1(1), 1; https://doi.org/10.3390/aieduc1010001
Submission received: 25 August 2025 / Accepted: 26 August 2025 / Published: 28 August 2025

Abstract

The rapid expansion of Artificial Intelligence in Education (AIED) has created both remarkable opportunities and pressing concerns. Applications of intelligent tutoring systems, learning analytics, generative models, and educational robotics illustrate the transformative momentum of the field, yet they also raise fundamental questions regarding ethics, equity, and sustainability. The mission of AI in Education (MDPI) is to provide a rigorous, interdisciplinary, and inclusive platform where these debates can unfold. The journal bridges pedagogy and engineering, welcomes both empirical evidence of positive impacts and critical examinations of systemic risks, and advances responsible innovation in real educational settings. By integrating methodological standards, governance perspectives, and pedagogical ethics, including teacher-centered validation approaches, AI in Education positions itself as a space for constructive dialogue that values both enthusiasm and critique. Above all, the journal is committed to a human-centered vision for AIED, so that innovation in classrooms remains grounded in care, responsibility, and educational purpose.

1. Introduction and Opportunities

Artificial Intelligence in Education (AIED) has rapidly evolved into a field that blends computational innovation with pedagogical inquiry, aiming to improve learning processes while raising critical ethical and methodological questions. Yet, AI does not stand alone in shaping future classrooms. A parallel trajectory is found in the rise of educational robotics, which offers hands-on, embodied experiences that cultivate computational thinking, problem solving, and creativity. Educational robotics has grown from early experiments such as Seymour Papert’s Turtle robot into a global industry and a vibrant research area spanning STEM education, language learning, and social development (Evripidou et al., 2020). This convergence of AI and robotics underscores the interdisciplinary nature of contemporary educational innovation, where digital and physical agents jointly redefine teaching and learning practices.
AIED research itself has matured into distinct thematic clusters, including foundations of AI, classroom applications, explainability, AI literacy, and the rise of generative AI. Intelligent tutoring systems and personalized learning platforms demonstrate how machine learning and natural language processing can enhance differentiated instruction. Learning analytics connects data-driven insights to pedagogy, offering teachers actionable feedback on learning trajectories (Siemens, 2013). Evidence from research on teacher preparation in ICT integration further highlights the importance of professional readiness and targeted training, lessons that are equally critical when adopting AI tools in education (Tondeur et al., 2012). The pace and breadth of this progress call for a journal that integrates pedagogical, ethical, and technical perspectives in one common forum.

2. Aims and Scope of the Journal

AI in Education aspires to be more than a repository of research papers. It is envisioned as a meeting ground for educators, engineers, learning scientists, and policymakers who share a common purpose: ensuring that innovation in classrooms is both impactful and responsible. The journal will highlight empirical classroom studies and design-based research, while also serving as a forum for critical perspectives on ethics, equity, and sustainability. We welcome explorations of teacher professional learning, AI literacy, and explainable interfaces that strengthen classroom decision-making. We encourage contributions on educational robotics and embodied learning that connect physical and digital intelligence in pedagogically meaningful ways. Transparency and reproducibility remain cornerstones of our vision, along with policy and governance analyses that examine privacy, accountability, procurement, and evaluation frameworks at the school and system levels. Finally, we will not shy away from publishing carefully documented failures and null results, recognizing their essential role in guiding the field forward.

3. Challenges and Critical Perspectives

Alongside enthusiasm, scholars urge caution. As Avraamidou (2024) reminds us, the momentum of AI in Education can reproduce patterns of extraction and exploitation if ethical considerations are sidelined. Noble (2018) documents how search engines systematically marginalize women and people of color, while Otterbacher et al. (2017) demonstrate that algorithms often penalize stereotype-violating content, reinforcing cultural biases. Luccioni et al. (2023) further draw attention to the environmental costs of large-scale AI infrastructures. These dynamics compel us to ask whose knowledge is being encoded, who truly gains, and who ultimately pays the price for innovation.
Another concern is the narrowing effect of automated curation on the diversity of knowledge and learning pathways. Reich (2020) shows how educational platforms can default to scalable, efficiency-driven models that flatten pluralism and restrict opportunities for critical inquiry and democratic participation. At the pedagogical level, the danger is not only technical but also human. AI systems that prioritize efficiency and standardization risk eroding the relational and emotional dimensions of learning. Education is more than optimized delivery. It is dialogue, care, and identity formation. If these elements are overlooked, technology may unintentionally dehumanize the very processes that it seeks to enhance.
This journal, therefore, commits to being a forum where opportunities and risks are examined side by side. By welcoming contributions that highlight the transformative potential of AI, as well as those that expose its limitations and unintended consequences, we aim to foster a constructive tension that balances innovation with caution, efficiency with care, and vision with responsibility.

4. Responsible Frameworks: Methods and Governance

Ensuring the credibility and legitimacy of AI in Education requires robust frameworks that integrate methodological rigor, institutional governance, and pedagogical ethics.
First, evaluation must not be confined to model accuracy or technical performance. It should include experimental and quasi-experimental designs, longitudinal analyses, and qualitative inquiry that capture real classroom dynamics (Shadish et al., 2002). Transparency in datasets, open code, and reproducibility are essential, as is the explicit acknowledgement of limitations and their pedagogical implications.
Second, AI in Education is a governance challenge. Questions of data privacy, accountability, and oversight demand institutional and policy frameworks. Algorithmic systems in schools should be subject to public accountability with clear mechanisms for redress, bias mitigation, and equitable access. At the international level, governance must also address the environmental footprint of AI and align technological innovation with sustainability goals (De Vries, 2023).
Third, responsible design benefits from embedding pedagogical values within the technical architecture itself. Recent work has proposed teacher-configurable validation approaches, such as the Ethical Pedagogical Validation Layer (Chatzichristofis et al., 2025), which assesses developmental appropriateness, semantic fidelity, and cultural sensitivity before AI outputs reach learners. Such frameworks keep educators in the role of interpreters and gatekeepers, ensuring that pedagogical responsibility remains central. By combining methodological rigor, institutional governance, and pedagogical validation, we can ensure that AI in Education is not only effective but also equitable, transparent, and human-centered. These principles will guide editorial decisions and signal the standards that we expect from published work.

5. Conclusions

AI in Education is at a crossroads. It carries the potential to transform learning experiences, empower teachers, and broaden access, yet it also risks deepening inequities, eroding trust, and prioritizing efficiency over care. The mission of AI in Education (MDPI) is to provide a scholarly forum where these debates unfold with rigor, inclusivity, and balance. We welcome empirical studies, design-based research, critical interventions, policy analyses, and reflective case studies. Above all, we seek contributions that bridge pedagogy and engineering, so that AI remains contextualized by human intention, interpretation, and responsibility.
When a child encounters the output of a generative system, the decisive issue is not only what the machine says but also how the learner understands it and how the teacher frames it. This is the essence of our editorial vision. The voice of the machine must always enter the classroom within a framework of human care, ethical responsibility, and pedagogical purpose.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Avraamidou, L. (2024). Can we disrupt the momentum of the AI colonization of science education? Journal of Research in Science Teaching, 61(10), 2570–2574.
  2. Chatzichristofis, S. A., Tsopozidis, A., Kyriakidou-Zacharoudiou, A., Evripidou, S., & Amanatiadis, A. (2025). Designing an AI-supported framework for literary text adaptation in primary classrooms. AI, 6(7), 150.
  3. De Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194.
  4. Evripidou, S., Georgiou, K., Doitsidis, L., Amanatiadis, A. A., Zinonos, Z., & Chatzichristofis, S. A. (2020). Educational robotics: Platforms, competitions and expected learning outcomes. IEEE Access, 8, 219534–219562.
  5. Luccioni, A. S., Viguier, S., & Ligozat, A.-L. (2023). Estimating the carbon footprint of BLOOM, a 176B parameter language model. Journal of Machine Learning Research, 24, 1–15.
  6. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
  7. Otterbacher, J., Bates, J., & Clough, P. (2017, May 6–11). Competent men and warm women: Gender stereotypes and backlash in image search results. 2017 ACM CHI Conference on Human Factors in Computing Systems (pp. 6620–6631), Denver, CO, USA.
  8. Reich, J. (2020). Failure to disrupt: Why technology alone can't transform education. Harvard University Press.
  9. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  10. Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist, 57(10), 1380–1400.
  11. Tondeur, J., van Braak, J., Sang, G., Voogt, J., Fisser, P., & Ottenbreit-Leftwich, A. (2012). Preparing pre-service teachers to integrate technology in education: A synthesis of qualitative evidence. Computers & Education, 59(1), 134–144.

