AI Educ., Volume 2, Issue 2 (June 2026) – 8 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 1719 KB  
Article
Decoding Student–Chatbot Dialogues: How Interaction Structure Is Associated with Learning Gains in AI-Assisted Programming
by Ean Teng Khor and Arunaksh Kapoor
AI Educ. 2026, 2(2), 15; https://doi.org/10.3390/aieduc2020015 - 9 May 2026
Viewed by 109
Abstract
The study examines how secondary school students interacted with an AI-powered educational chatbot, MyBotBuddy, while working on a programming task, and how observed dialogue structures were associated with differences in pre- to post-test performance. Fifty students first completed an unassisted pre-test, then attempted a chatbot-supported programming task, and finally completed an unassisted post-test. Based on score change, students were grouped into learning gain, no gain, and learning loss categories. Dialogue transcripts were analyzed using Epistemic Network Analysis to identify co-occurring discourse patterns, alongside descriptive sentiment analysis to characterize lexical tone. Students in the learning gain group showed more connected multi-turn patterns involving solution attempts, feedback uptake, knowledge-related contributions, and clarification following feedback. In contrast, the no gain and learning loss groups showed less iterative and less systematically connected interaction structures. Average sentiment polarity differed only slightly across groups and is interpreted cautiously because the dialogue was technical and programming focused. The findings are associational and exploratory rather than causal and suggest that learner engagement with a chatbot may be more informative than interaction frequency alone. We discuss implications for educational chatbot design, especially the potential value of multi-turn scaffolding and reflective prompting, while outlining the need for future validation, baseline-controlled analyses, and experimental work. Full article
26 pages, 1358 KB  
Article
Integrating AI Literacy in Chemistry Graduate Education: Harnessing the Power of Transformer-Based Models
by Yulia V. Sevryugina, Kevyn Collins-Thompson and Nils G. Walter
AI Educ. 2026, 2(2), 14; https://doi.org/10.3390/aieduc2020014 - 4 May 2026
Viewed by 277
Abstract
Rapid adoption of general-purpose generative AI (GenAI) tools, such as ChatGPT, is reshaping teaching, learning, and assessment in chemical education. In this study, we expanded the implementation of GenAI tools within an upper-level undergraduate biochemistry course, providing students access to four distinct platforms: commercial chatbots (ChatGPT and LearningClues) and in-house tools developed at the University of Michigan (U-M GPT and U-M Maizey). We analyzed student learning outcomes from GenAI-enhanced writing assignments using pre- and post-surveys. Our results show that integrating GenAI into biochemistry coursework promoted effective and responsible usage, enhanced students’ prompt literacy, built ethical awareness, and increased confidence in utilizing these tools. The study specifically examined factors influencing GenAI acceptance: familiarity, perceived usefulness, ease of use, and trust. Trust emerged as the most significant criterion, with a majority of students recommending in-house chatbots for future cohorts due to strong privacy and ethical standards. Over the last year, we observed a shift in student sentiment from excitement about efficiency to emerging concerns about the silencing of creativity. This highlights the importance of addressing both the capabilities and risks of AI tools through teaching AI literacy. Full article

22 pages, 331 KB  
Review
Intelligent Immersion: AI and VR Tools for Next-Generation Higher Education
by Konstantinos Liakopoulos and Anastasios Liapakis
AI Educ. 2026, 2(2), 13; https://doi.org/10.3390/aieduc2020013 - 1 May 2026
Viewed by 507
Abstract
Learning is fundamentally human, even as Artificial Intelligence (AI) challenges human exclusivity. AI, along with Virtual Reality (VR), emerges as a powerful tool that is set to transform higher education, the institutional embodiment of this pursuit at its highest level. These technologies offer the potential not to replace the human factor, but to enhance our ability to create more adaptive, immersive, and truly human-centric learning experiences, aligning powerfully with the emerging vision of Education 5.0, which emphasizes ethical, collaborative learning ecosystems. This research maps how AI and VR tools act as a disruptive force, while also examining their capabilities and limitations. Moreover, it explores how AI and VR interact to overcome traditional pedagogy’s constraints, fostering environments where technology serves human learning goals. Employing a comprehensive two-month audit of over 60 AI, VR, and AI-VR hybrid tools, the study assesses their functionalities and properties, such as technical complexity, cost structures, integration capabilities, and compliance with ethical standards. Findings reveal that AI and VR systems provide significant opportunities for the future of education by providing personalized and captivating environments that encourage experiential learning and improve student motivation across disciplines. Nonetheless, numerous challenges limit widespread adoption, such as demanding infrastructure requirements and the need for strategic planning. By articulating a structured evaluative framework and highlighting emerging trends, this paper provides practical guidance for educational stakeholders seeking to select and implement AI and VR tools in higher education. Full article
17 pages, 765 KB  
Article
From Cognitive Necessity to Cognitive Choice: Higher Education Assessment and Learning in the Age of Generative AI
by Matthew Montebello
AI Educ. 2026, 2(2), 12; https://doi.org/10.3390/aieduc2020012 - 16 Apr 2026
Viewed by 696
Abstract
The widespread adoption of generative artificial intelligence in higher education has intensified debates around assessment, authorship, and academic integrity. This paper argues that such debates obscure a more fundamental pedagogical shift, namely, the decoupling of assessment performance from cognitive engagement. Historically, assessment functioned not only as a measure of learning, but also as a structural mechanism that implicitly enforced cognitive engagement. With the advent of GenAI, learners can increasingly produce assessment outputs without necessarily engaging in the cognitive processes traditionally associated with learning. As a result, cognitive engagement has shifted from being a pedagogical necessity to an intentional learner choice. This paper conceptualises this shift as the cognitive engagement gap, wherein successful assessment completion no longer reliably indicates learning or epistemic development. Through a theory-informed conceptual analysis, the paper examines how GenAI reconfigures learning processes, challenges the validity of assessment as a proxy for learning, and exposes long-standing assumptions embedded in assessment-centred pedagogies. In response, the paper proposes a Cognitive Engagement-Centred Assessment (CECA) framework, offering principled guidance for designing assessment that foregrounds cognitive processes, metacognition, and learning assurance in AI-mediated environments. The paper concludes by positioning GenAI not as a threat to assessment, but as a catalyst for more intentional, transparent, and learning-centred pedagogical design. Full article

27 pages, 3213 KB  
Systematic Review
Pedagogical Use of Responsible Generative AI in Higher Education; Opportunities and Challenges: A Systematic Literature Review
by Md Zainal Abedin, Ahmad Hayajneh and Bijan Raahemi
AI Educ. 2026, 2(2), 11; https://doi.org/10.3390/aieduc2020011 - 10 Apr 2026
Viewed by 1212
Abstract
Generative Artificial Intelligence (GenAI) is transforming higher education in terms of pedagogy, student involvement, and academic management. This systematic literature review examines 30 peer-reviewed articles published from 2019 to 2025, adhering to PRISMA 2020 and Kitchenham’s methodologies. Descriptive and thematic analyses highlight five opportunities: (a) tailored and adaptive education; (b) deliberate fostering of critical thinking; (c) enhanced accessibility for varied learners; (d) teaching innovation via multimodal content development and feedback; and (e) collaborative methods that regard AI as a co-teacher. Four ongoing challenge categories also surface: (a) risks to academic integrity; (b) excessive dependence on GenAI that may hinder learner independence; (c) inconsistent faculty preparedness and change-management abilities; and (d) differences in infrastructure and policy both regionally and globally. Intersecting ethical issues, such as data privacy, algorithmic bias, transparency, and accountability, highlight the necessity for governance that aligns with institutional risk and reflects societal values. Analyzing the recent literature, this systematic review offers four contributions: (a) a recommendation model for responsible GenAI implementation in higher education institutions; (b) a framework for sustainable integration of GenAI; (c) highlighted directions for future research; and (d) an integrated roadmap of policy and pedagogical recommendations. These models emphasize the integration of AI literacy, ethical considerations, and critical thinking goals into educational programs. The review advocates for a strategic, stakeholder-focused approach to implementation that enhances rather than replaces human instruction, thus connecting GenAI’s educational potential with ethical, context-aware avenues for institutional transformation. Full article

21 pages, 281 KB  
Essay
Mobile AI as Relational Infrastructure: Translating Meaning and Belonging in International Student Onboarding
by Jimmie Manning, Md Mahmudur Rahman and Ngozi Oguejiofor
AI Educ. 2026, 2(2), 10; https://doi.org/10.3390/aieduc2020010 - 7 Apr 2026
Viewed by 620
Abstract
Generative artificial intelligence in higher education is typically framed as either a student productivity tool or an institutional disruption. This agenda-setting essay advances a third position: mobile generative AI functions as relational infrastructure—a persistent communicative presence that mediates identity, meaning-making, and belonging during institutional transition. Focusing on international graduate student onboarding, we abductively “think through” two complementary theoretical lenses. Constitutive Artificial Intelligence Identity Theory (CAIIT) conceptualizes AI as a co-constitutive participant in identity formation through recursive communicative feedback loops. Language Convergence/Meaning Divergence (LC/MD) theory explains how shared institutional language masks interpretive gaps across intercultural and bureaucratic contexts. Reading narrative vignettes through these frameworks, we argue that generative AI is neither simple curricular tool nor personal aid, but both relational and organizational infrastructure, redistributing translational, emotional, and interpretive labor in higher education. We outline four design principles for AI-integrated onboarding: distinguish communicative scaffolding from cognitive replacement; design systems that assume meaning divergence; center equity in AI-mediated transitions; and anticipate ethical risk. Reframing AI as relational infrastructure shifts AI-in-education research toward relational accountability and institutional care. Full article
36 pages, 3201 KB  
Article
Using an Ethical Framework to Examine K-12 Leaders’ Perceived Risks About AI
by Raffaella Borasi, Jonathan Herington, Karen J. DeAngelis, Yu Jung Han, Sharon Mason, Patricia Vaughan-Brogan and David E. Miller
AI Educ. 2026, 2(2), 9; https://doi.org/10.3390/aieduc2020009 - 1 Apr 2026
Viewed by 702
Abstract
This article contributes to current debates around the ethics of using AI in K-12 education by extending an ethical framework based on the constructs of wellbeing, autonomy and justice to examine how AI may differentially impact specific stakeholders. Data about K-12 building and district leaders’ perceptions of AI risks were collected during the 2023–24 school year in Western New York as part of an exploratory sequential mixed methods study, which included semi-structured interviews with a diverse group of 36 K-12 leaders, followed by a survey (n = 160). Survey findings confirm K-12 leaders’ widespread recognition, although at varying levels of concern, of AI risks related to (a) students cheating, (b) students’ other questionable AI uses, (c) educators’ questionable AI uses, (d) increasing inequities due to AI, (e) cybersecurity and privacy breaches, and to a much lesser extent, the (f) potential for job replacement. The ethical analysis reveals major differences in the implications of each of these six kinds of AI risk for the wellbeing, autonomy, and justice of K-12 educators, K-12 students, and society, respectively, as well as tensions between competing needs and values, which in turn call for risk-specific strategies as well as inevitable tradeoffs. A comparison with a study of musicians’ perceptions of AI using the same ethical framework reveals interesting similarities and differences in ethical concerns about AI in different fields, suggesting the value of more cross-disciplinary studies. Full article

42 pages, 1499 KB  
Article
Auditing GenAI Literature Search Workflows: A Replicable Protocol for Traceable, Accountable Retrieval in Student-Facing Inquiry
by Cristo Leon and Michelle Kudelka
AI Educ. 2026, 2(2), 8; https://doi.org/10.3390/aieduc2020008 - 25 Mar 2026
Viewed by 1063
Abstract
Generative AI systems increasingly mediate how students retrieve literature and generate citations, shifting methodological rigor toward the maintenance of an auditable evidence trail. This study audits the search stage of AI-assisted literature review work, focusing on retrieval performance and citation traceability rather than downstream screening or synthesis. Four widely accessible tools were compared across two retrieval postures, and Boolean queries were executed against Scopus and evaluated against a DOI-verified librarian baseline built from Scopus, Web of Science, and Google Scholar. Using a canonical prompt and a bounded top-k capture rule (k = 20), each bibliographic record was evaluated for DOI traceability, DOI resolution integrity, metadata accuracy, and run-to-run drift. Records were screened through staged title/abstract and full-text eligibility review, and the final set after quality appraisal included 37 studies. Across sixteen audit runs, natural-language prompting frequently produced under-target yields, recurrent integrity failures, and low overlap with the librarian benchmark. Boolean translation improved run completion and increased the proportion of auditable records, but reproducibility remained unstable across repeated runs. These findings show that correctness at the record level does not ensure stability at the evidence-set level. Limitations include the bounded tool set, the search-stage focus, and the absence of downstream screening or synthesis evaluation. Retrieval posture, therefore, emerges as a practical governance lever for AI-assisted literature review workflows and supports the use of a student-facing verification checklist anchored in DOI verification and transparent protocol capture. This research received no external funding. OSF registration: Open Science Framework, 10.17605/OSF.IO/U8NHT.
Full article
