AI Technology-Enhanced Learning and Teaching

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: 31 August 2026 | Viewed by 20,603

Special Issue Editors


Dr. Tianchong Wang
Guest Editor
College of Education, Psychology and Social Work, Flinders University, Adelaide 5042, Australia
Interests: AI in education; blended learning; digital learning for development; women in STEM

Prof. Dr. Eric C. K. Cheng
Guest Editor
Yew Chung College of Early Childhood Education, No. 2, Tin Wan Hill Road, Tin Wan, Aberdeen, Hong Kong
Interests: organizational factors and management strategies; knowledge management; school management; lesson and learning study; AI in education

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue of Information focused on "AI Technology-Enhanced Learning and Teaching". This Special Issue aims to explore and advance our understanding of how artificial intelligence is transforming educational landscapes across all levels of learning and development.

The rapid evolution of AI technologies has created unprecedented opportunities to enhance learning and teaching processes, personalize educational experiences, and reimagine traditional pedagogical approaches. This Special Issue welcomes contributions that investigate the theoretical foundations, practical applications, and empirical evidence of AI's role in education, including, but not limited to, the following topics:

  • The development and implementation of AI-powered learning environments and platforms;
  • Innovative applications of machine learning and natural language processing in educational contexts;
  • AI-enhanced assessment and feedback systems;
  • Personalized and adaptive learning experiences enabled by AI;
  • The integration of AI with existing educational technologies and learning management systems;
  • Ethical considerations and best practices in implementing AI in education;
  • AI-supported learning analytics and educational data mining;
  • The impact of AI on teacher professional development and pedagogical practices;
  • Cross-cultural and inclusive approaches to AI in education;
  • The role of AI in supporting special education and diverse learning needs;
  • AI applications in formal, informal, and workplace learning contexts;
  • The digital transformation of educational institutions through AI implementation.

We particularly encourage submissions that address the challenges and opportunities in implementing AI-enhanced learning and teaching solutions in various educational contexts, from early childhood education to higher education and professional development. Papers may focus on theoretical frameworks, empirical research, case studies, systematic reviews, or practical applications that contribute to our understanding of how AI can effectively support teaching and learning.

This Special Issue aims to bring together researchers, educators, practitioners, and technologists to share insights and innovations that will shape the future of education. We welcome both theoretical and practical perspectives that can inform the development and implementation of AI-enhanced learning and teaching solutions.

Manuscript submissions should present original research, be well grounded in the current literature, and provide meaningful implications for practice and future research. We seek research that bridges the gap between pedagogical needs and AI applications, ensuring that technological advancements address real-world challenges in various educational contexts.

We look forward to receiving your contributions to this important dialogue on advancing AI technology-enhanced learning and teaching.

Dr. Tianchong Wang
Prof. Dr. Eric C. K. Cheng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI in education
  • human–computer interaction
  • adaptive e-learning
  • inclusive teaching
  • digital learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research

20 pages, 976 KB  
Article
Decoupling Fairness Perception from Grading Validity in Digitally Mediated Peer Assessment: A Two-Stage fsQCA Study
by Duen-Huang Huang and Yu-Cheng Wang
Information 2026, 17(5), 411; https://doi.org/10.3390/info17050411 - 25 Apr 2026
Viewed by 137
Abstract
Artificial intelligence (AI) has become increasingly embedded in technology-enhanced learning environments, where peer assessment now serves both instructional and analytic purposes. Beyond allocating feedback and grades, it also produces data that is later interpreted through learning analytics systems. In practice, visible indicators such as students’ fairness perceptions and the degree of agreement among peer raters are often treated as signs that the assessment process is functioning effectively. However, these indicators do not necessarily correspond to grading validity. Students may regard a peer assessment process as fair even when peer-generated ratings remain weakly aligned with expert judgement. This study, therefore, examines whether the socio-technical configurations associated with high perceived fairness in a digitally mediated peer assessment environment also correspond to criterion-referenced grading validity. Data were collected from 215 undergraduate students enrolled in an Artificial Intelligence Foundations course over two consecutive semesters at a university in Taiwan, with instructor ratings serving as an external expert reference within the course context, rather than as a universal ground truth. Because anonymity conditions and semester were fully confounded in the study design, differences linked to anonymity should not be interpreted as isolated causal effects. A two-stage fuzzy-set Qualitative Comparative Analysis (fsQCA) was used. In the first stage, three equifinal configurations associated with high perceived fairness were identified. In the second stage, these configurations were examined against four grading objectivity outcomes: peer–instructor alignment, peer convergence, familiarity bias, and leniency bias. The findings show that fairness perception and grading validity are only partially aligned. 
Configurations anchored in explicit criterion transparency consistently supported both experiential legitimacy and evaluative accuracy. By contrast, one configuration was associated with high peer convergence while remaining weakly aligned with instructor standards, a pattern described here as false objectivity; this context-dependent configurational finding warrants further investigation across other settings. The study contributes to research on digitally enhanced assessment and learning analytics by showing that fairness perception, peer convergence, and grading validity should be treated as analytically distinct dimensions of assessment quality. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
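
The consistency and coverage measures at the core of fsQCA, as used in the study above, are simple set-theoretic ratios over fuzzy membership scores. The sketch below is a minimal illustration on invented data; the paper's actual calibration, truth-table analysis, and two-stage design are substantially more involved, and the condition names here are hypothetical stand-ins.

```python
# Standard fsQCA sufficiency measures (Ragin-style), on hypothetical
# fuzzy membership scores for five students. Not the paper's data.

def consistency(x, y):
    """Consistency of 'configuration X is sufficient for outcome Y'."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of outcome Y accounted for by configuration X."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Hypothetical condition memberships; a configuration is the fuzzy AND
# (element-wise minimum) of its conditions.
criterion_transparency = [0.9, 0.8, 0.2, 0.7, 0.4]
anonymity              = [0.8, 0.6, 0.3, 0.9, 0.5]
fairness_perception    = [0.9, 0.5, 0.3, 0.8, 0.6]

config = [min(a, b) for a, b in zip(criterion_transparency, anonymity)]
print(f"consistency = {consistency(config, fairness_perception):.3f}")
print(f"coverage    = {coverage(config, fairness_perception):.3f}")
```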
28 pages, 904 KB  
Article
Supervised Machine Learning-Based Multiclass Classification and Interpretable Feature Importance Analysis of Teacher Job Satisfaction
by Bouabid Qabliyane, Zakaria Khoudi, Abdelamine Elouafi, Abderrahim Salhi and Mohamed Baslam
Information 2026, 17(4), 377; https://doi.org/10.3390/info17040377 - 17 Apr 2026
Viewed by 287
Abstract
This study examines the increasing concern regarding teacher job satisfaction, which has a direct impact on retention, instructional quality, and student outcomes. Traditionally, teacher satisfaction has been evaluated through questionnaires, which present limitations in terms of data efficiency and analysis. In this study, machine learning techniques were applied to data from the PISA 2022 teacher questionnaire in Morocco (N = 2998 lower-secondary teachers). Two multiclass classification targets were defined: overall job satisfaction (SATJOB_class) and satisfaction with the teaching profession (SATTEACH_class), each categorised into three balanced classes: low (<−0.5), medium (−0.5 to 0.5), and high (>0.5). The methodology comprised four key stages. Initially, comprehensive pre-processing was conducted to address missing values, retaining features with fewer than 300 missing entries and applying mode imputation. Subsequently, nine classifiers, namely logistic regression, K-nearest neighbours, multinomial naïve Bayes, support vector machine, decision tree, random forest, XGBoost, AdaBoost, and a feed-forward artificial neural network (ANN), were evaluated using identical train/test splits and hyperparameter tuning. Third, model performance was assessed using accuracy, precision, recall, and F1-score. Finally, feature importance was derived from tree-based and permutation methods. The results indicated that XGBoost outperformed the other models for SATJOB_class, with an accuracy of 0.61, a precision of 0.62, a recall of 0.61, and an F1-score of 0.61, followed by random forest, logistic regression, and AdaBoost (each with accuracy = 0.59). For SATTEACH_class, random forest led with an accuracy of 0.59, followed closely by XGBoost (0.58), the ANN (0.57), and AdaBoost (0.56). 
Key predictors of teacher job satisfaction included workload-related variables and school-environment factors, which consistently emerged as the most important features across the best-performing models. The methodology and open-source pipeline provide a reproducible framework for evidence-based interventions to improve teacher retention and instructional quality, offering valuable insights for policymakers and educational administrators. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
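
The four-stage methodology summarised above (mode imputation, identical train/test splits, model comparison, metric reporting) can be sketched in a few lines of scikit-learn. This is an illustrative pipeline on synthetic data, not the authors' code: GradientBoostingClassifier stands in for XGBoost so the example needs only scikit-learn, and all scores are arbitrary.

```python
# Sketch of the abstract's pipeline on synthetic three-class data:
# mode imputation -> shared train/test split -> model fit -> metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan  # simulate missing questionnaire items

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
model = make_pipeline(SimpleImputer(strategy="most_frequent"),  # mode imputation
                      GradientBoostingClassifier(random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f} "
      f"macro-F1={f1_score(y_te, pred, average='macro'):.2f}")
```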

18 pages, 5511 KB  
Article
Exploring the Application of Large Language Models (LLMs) in Data Structure Instruction: An Empirical Analysis of Student Learning Outcomes in Computer Science
by Hongzhi Li, Lijun Xiao, Kezhong Lu, Dun Li, Zheqing Zhang and Qishou Xia
Information 2026, 17(4), 353; https://doi.org/10.3390/info17040353 - 8 Apr 2026
Viewed by 482
Abstract
Recent advancements in Large Language Models (LLMs), including ChatGPT, DeepSeek, and Claude, have facilitated their growing integration into computer science education, including data structure courses. Despite their widespread adoption, the association between sustained and informal LLM usage and students’ learning outcomes remains insufficiently understood. This study seeks to address this gap by empirically examining the association between LLM usage and undergraduate performance in data structure education. We conduct a twelve-week empirical study involving fifty-four undergraduate students, in which LLMs were made freely accessible but neither explicitly encouraged nor discouraged during coursework and assignments. Students’ LLM usage patterns are analyzed in relation to their academic performance across different task types. Findings reveal a significant negative association between extensive reliance on LLMs for cognitively demanding tasks and overall learning outcomes. Additionally, an inverse associative trend is observed between the frequency of LLM usage across some learning activities and academic performance. In contrast, the use of LLMs for supplementary purposes, including conceptual clarification and theoretical understanding, exhibits a notably positive association with final performance. These findings suggest a task-dependent associative relationship between LLM usage and learning outcomes: LLM usage for conceptual learning shows a positive association with the mastery of relevant knowledge when used as a supplementary learning tool, while excessive LLM usage shows a negative association with the development of fundamental analytical and problem-solving skills. This study highlights the importance of carefully integrating LLMs into data structure education to support learning while preserving students’ independent cognitive engagement. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
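
Usage/performance associations of the kind this study reports are typically quantified with a rank correlation. A minimal illustration on invented data (the study's actual variables, sample, and analysis are richer):

```python
# Spearman rank correlation between hypothetical per-student LLM-usage
# counts and final scores; the negative rho mirrors the direction of
# the association reported for cognitively demanding tasks.
from scipy.stats import spearmanr

llm_uses   = [2, 5, 9, 14, 20, 25, 31, 40]     # hypothetical weekly queries
final_mark = [88, 85, 80, 82, 70, 66, 61, 55]  # hypothetical course scores

rho, p = spearmanr(llm_uses, final_mark)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```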

18 pages, 2634 KB  
Article
Evidence-Grounded LLM Summarization for Actionable Student Feedback Analysis
by Zhanerke Baimukanova, Yerassyl Saparbekov, Hyesong Ha and Minho Lee
Information 2026, 17(4), 351; https://doi.org/10.3390/info17040351 - 7 Apr 2026
Viewed by 389
Abstract
Analyzing large-scale student feedback is critical for higher education quality assurance, yet manual analysis is inefficient and subjective. This paper proposes an integrated framework that unifies supervised classification, unsupervised clustering, and retrieval-augmented generation (RAG) to produce evidence-grounded and actionable insights. Ensemble-based supervised models perform thematic classification, while multi-encoder embedding fusion enables unsupervised discovery of coherent feedback clusters. A multi-stage RAG module integrates category predictions and cluster structure to retrieve representative evidence and generate transparent summaries with citation traceability. The framework is evaluated on student feedback collected from a Central Asian university and two public benchmarks, EduRABSA and Coursera course reviews, covering seven thematic categories. The supervised ensemble achieves 83.0% accuracy and 0.829 Macro-F1 on the primary dataset, while unsupervised clustering attains a silhouette score of 0.271 under the best fusion strategy. Independent evaluation on external benchmarks yields ensemble accuracy of 81.1% on EduRABSA and 49.8% on Coursera, confirming the framework’s adaptability across diverse educational contexts. By leveraging supervised labels and unsupervised structure, the proposed framework enables evidence-grounded, category-aware LLM-based summaries that faithfully reflect the diversity and distribution of student feedback and support actionable educational decision-making. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
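
The evidence-retrieval step described above reduces, at its simplest, to a nearest-to-centroid search in embedding space: for each cluster of feedback comments, cite the comment closest to the cluster centre. A toy sketch with random stand-in embeddings; the framework's multi-encoder fusion and multi-stage RAG pipeline are far richer than this.

```python
# For each cluster of embedded feedback comments, pick the comment with
# the highest cosine similarity to the cluster centroid as the
# "representative evidence" to cite in a summary. Embeddings here are
# random placeholders for real encoder output.
import numpy as np

rng = np.random.default_rng(1)
comments = [f"comment {i}" for i in range(12)]
emb = rng.normal(size=(12, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-length rows
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])

reps = {}
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    centroid = emb[idx].mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    reps[c] = idx[np.argmax(emb[idx] @ centroid)]   # max cosine similarity
    print(f"cluster {c}: cite '{comments[reps[c]]}'")
```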

21 pages, 1207 KB  
Article
Insights on the Pedagogical Abilities of AI-Powered Tutors in Math Dialogues
by Verónica Parra, Ana Corica and Daniela Godoy
Information 2026, 17(1), 51; https://doi.org/10.3390/info17010051 - 6 Jan 2026
Viewed by 1042
Abstract
AI-powered tutors that interact with students in question-answering scenarios using large language models (LLMs) as foundational models for generating responses represent a potential scalable solution to the growing demand for one-to-one tutoring. In fields like mathematics, where students often face difficulties, sometimes leading to frustration, easy-to-use natural language interactions emerge as an alternative for enhancing engagement and providing personalized advice. Despite their promising potential, the challenges for LLM-based tutors in the math domain are twofold. First, the absence of genuine reasoning and generalization abilities in LLMs frequently results in mathematical errors, ranging from inaccurate calculations to flawed reasoning steps and even the appearance of contradictions. Second, the pedagogical capabilities of AI-powered tutors must be examined beyond simple question-answering scenarios since their effectiveness in math tutoring largely depends on their ability to guide students in building mathematical knowledge. In this paper, we present a study exploring the pedagogical aspects of LLM-based tutors through the analysis of their responses in math dialogues using feature extraction techniques applied to textual data. The use of natural language processing (NLP) techniques enables the quantification and characterization of several aspects of pedagogical strategies deployed in the answers, which the literature identifies as essential for engaging students and providing valuable guidance in mathematical problem-solving. The findings of this study have direct practical implications in the design of more effective math AI-powered tutors as they highlight the most salient characteristics of valuable responses and can thus inform the training of LLMs. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
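
Quantifying pedagogical strategies in tutor responses amounts to feature extraction from text. The features below (guiding questions, hint phrases, worked equation steps) are invented examples of the kind of surface cues an NLP pipeline can count, not the study's actual feature set.

```python
# Toy surface-feature extractor for a tutor's reply in a math dialogue.
# The feature names and keyword list are hypothetical illustrations.
import re

def pedagogy_features(response: str) -> dict:
    return {
        "questions": response.count("?"),                     # guiding questions
        "hint_phrases": len(re.findall(r"\b(hint|try|what if|notice)\b",
                                       response, re.IGNORECASE)),
        "math_steps": len(re.findall(r"=", response)),        # worked steps
    }

reply = ("Notice that 2x + 3 = 11. What if you subtract 3 first? "
         "Hint: then x = 8/2.")
print(pedagogy_features(reply))
```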

36 pages, 1309 KB  
Article
Listen Closely: Self-Supervised Phoneme Tracking for Children’s Reading Assessment
by Philipp Ollmann, Erik Sonnleitner, Marc Kurz, Jens Krösche and Stephan Selinger
Information 2026, 17(1), 40; https://doi.org/10.3390/info17010040 - 4 Jan 2026
Viewed by 940
Abstract
Reading proficiency in early childhood is crucial for academic success and intellectual development. However, a growing number of children struggle with reading: according to the most recent PISA study in Austria, one in five children has reading difficulties. The reasons for this are diverse, but an application that tracks children while reading aloud and guides them when they experience difficulties could offer meaningful help. This work therefore explores a prototyping approach for a core component that tracks children’s reading using a self-supervised Wav2Vec2 model with a limited amount of data. Self-supervised learning allows models to learn general representations from large amounts of unlabeled audio, which can then be fine-tuned on smaller, task-specific datasets, making it especially useful when labeled data is limited. Our model operates at the phonetic level with the help of the International Phonetic Alphabet (IPA). To implement this, the KidsTALC dataset from Leibniz University Hannover was used, which contains spontaneous speech recordings of German-speaking children. To enhance the training data and improve robustness, several data augmentation techniques were applied and evaluated, including pitch shifting, formant shifting, and speed variation. The models were trained using different data configurations to compare the effects of data variety and quality on recognition performance. The best model trained in this work achieved a phoneme error rate (PER) of 14.3% and a word error rate (WER) of 31.6% on unseen child speech data, demonstrating the potential of self-supervised models for such use cases. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
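
The phoneme error rate (PER) reported above is conventionally the Levenshtein edit distance between reference and hypothesized phoneme sequences, normalized by the reference length. A self-contained sketch; the phoneme sequences are illustrative, not from the dataset.

```python
# PER = edit_distance(reference, hypothesis) / len(reference),
# computed with the standard dynamic-programming Levenshtein table.
def edit_distance(ref, hyp):
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1]

ref = ["h", "a", "l", "o"]       # illustrative reference phonemes ("hallo")
hyp = ["h", "a", "l", "l", "o"]  # model output with one inserted phoneme
per = edit_distance(ref, hyp) / len(ref)
print(f"PER = {per:.1%}")        # prints "PER = 25.0%"
```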

19 pages, 798 KB  
Article
Addressing the Dark Side of Differentiation: Bias and Micro-Streaming in Artificial Intelligence Facilitated Lesson Planning
by Jason Zagami
Information 2026, 17(1), 12; https://doi.org/10.3390/info17010012 - 23 Dec 2025
Viewed by 978
Abstract
As artificial intelligence (AI) becomes increasingly woven into educational design and decision-making, its use within initial teacher education (ITE) exposes deep tensions between efficiency, equity, and professional agency. A critical action research study conducted across three iterations of a third-year ITE course investigated how pre-service teachers engaged with AI-supported lesson planning tools while learning to design for inclusion. Analysis of 123 lesson plans, reflective journals, and survey data revealed a striking pattern. Despite instruction in inclusive pedagogy, most participants reproduced fixed-tiered differentiation and deficit-based assumptions about learners’ abilities, a process conceptualised as micro-streaming. AI-generated recommendations often shaped these outcomes, subtly reinforcing hierarchies of capability under the guise of personalisation. Yet, through iterative reflection, dialogue, and critical framing, participants began to recognise and resist these influences, reframing differentiation as design for diversity rather than classification. The findings highlight the paradoxical role of AI in teacher education, as both an amplifier of inequity and a catalyst for critical consciousness, and argue for the urgent integration of critical digital pedagogy within ITE programmes. AI can advance inclusive teaching only when educators are empowered to interrogate its epistemologies, question its biases, and reclaim professional judgement as the foundation of ethical pedagogy. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)

24 pages, 2012 KB  
Article
Assessing the Readability of Russian Textbooks Using Large Language Models
by Andrei Paraschiv, Mihai Dascalu and Marina Solnyshkina
Information 2025, 16(12), 1071; https://doi.org/10.3390/info16121071 - 4 Dec 2025
Cited by 1 | Viewed by 1105
Abstract
This study aims to assess the capability of Large Language Models (LLMs), particularly GPT-4o, to evaluate and modify the complexity level of Russian school textbooks. We lay the groundwork for developing scalable, context-aware methods for readability assessment and text simplification in Russian educational materials, areas where traditional formulas often fall short. Using a corpus of 154 textbooks covering various subjects and grade levels, we evaluate the extent to which LLMs accurately predict the appropriate comprehension level of a text and how well they simplify texts by targeted grade reduction. Our evaluation framework employs GPT-4o as a multi-role agent in three distinct experiments. First, we prompt the model to estimate the target comprehension age for each segment and identify five key linguistic or conceptual features underpinning its assessment. Second, we simulate student comprehension by instructing the model to reason step-by-step through whether the text is understandable for a hypothetical student of the given grade. Third, we examine the model’s ability to simplify selected fragments by reducing their complexity by three grade levels. We further measure model perplexity and output token probabilities to probe the prediction confidence and coherence. Results indicate that while LLMs show considerable potential in complexity assessment (i.e., MAE of 1 grade level), they tend to overestimate text difficulty and face challenges in achieving precise simplification levels. Ease of understanding assessments generally align with human expectations, although texts with abstract, technical, or poetic content (e.g., Physics, History, and Literary Russian) pose challenges. Our study concludes that LLMs can substantially complement traditional readability metrics and assist teachers in developing suitable Russian educational materials. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
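
The headline grade-level evaluation reduces to simple error statistics over predicted versus actual grades. The numbers below are invented to mirror the reported tendencies (MAE near one grade level, with difficulty more often overestimated than underestimated), not the paper's data.

```python
# MAE and overestimation rate over hypothetical grade-level predictions.
true_grade = [3, 4, 5, 6, 7, 8, 9, 10]   # textbook's actual grade level
pred_grade = [4, 4, 6, 7, 7, 9, 11, 10]  # model's predicted grade level

errors = [p - t for p, t in zip(pred_grade, true_grade)]
mae = sum(abs(e) for e in errors) / len(errors)
over = sum(e > 0 for e in errors) / len(errors)
print(f"MAE = {mae:.2f} grades; difficulty overestimated for {over:.1%} of texts")
```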

20 pages, 2793 KB  
Article
Investigating Brain Activity of Children with Autism Spectrum Disorder During STEM-Related Cognitive Tasks
by Harshith Penmetsa, Rahma Abbasi, Nagasree Yellamilli, Kimberly Winkelman, Jeff Chan, Jaejin Hwang and Kyu Taek Cho
Information 2025, 16(10), 880; https://doi.org/10.3390/info16100880 - 10 Oct 2025
Cited by 1 | Viewed by 1496
Abstract
Children with Autism Spectrum Disorder (ASD) often experience cognitive difficulties that impact learning. This study explores the use of electroencephalogram data collected with the MUSE 2 headband during task-based cognitive sessions to understand how cognitive states in children with ASD change across three structured tasks: Shape Matching, Shape Sorting, and Number Matching. Following signal preprocessing using Independent Component Analysis (ICA), power across various frequency bands was extracted using the Welch method. These features were used to analyze cognitive states in children with ASD in comparison to typically developing (TD) peers. To capture dynamic changes in attention over time, Morlet wavelet transform was applied, revealing distinct brain signal patterns. Machine learning classifiers were then developed to accurately distinguish between ASD and TD groups using the EEG data. Models included Support Vector Machine, K-Nearest Neighbors, Random Forest, an Ensemble method, and a Neural Network. Among these, the Ensemble method achieved the highest accuracy at 0.90. Feature importance analysis was conducted to identify the most influential EEG features contributing to classification performance. Based on these findings, an ASD map was generated to visually highlight the key EEG regions associated with ASD-related cognitive patterns. These findings highlight the potential of EEG-based models to capture ASD-specific neural and attentional patterns during learning, supporting their application in developing more personalized educational approaches. However, due to the limited sample size and participant heterogeneity, these findings should be considered exploratory. Future studies with larger samples are needed to validate and generalize the results. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
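
Band-power features of the kind described above are typically obtained by applying the Welch method to an EEG channel and summing the power spectral density over standard frequency bands. A sketch on a synthetic signal (a 10 Hz alpha-band tone plus noise), not MUSE 2 data; the band edges used here are one common convention.

```python
# Welch PSD of a synthetic EEG trace, summed over standard bands.
import numpy as np
from scipy.signal import welch

fs = 256                                   # MUSE 2 samples EEG at 256 Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # 0.5 Hz frequency resolution
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}
print(max(power, key=power.get))  # prints "alpha": the 10 Hz tone dominates
```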

28 pages, 2551 KB  
Article
Artificial Intelligence in Education (AIEd): Publication Patterns, Keywords, and Research Focuses
by Weijing Zhu, Luxi Wei and Yinghong Qin
Information 2025, 16(9), 725; https://doi.org/10.3390/info16090725 - 25 Aug 2025
Cited by 4 | Viewed by 6662
Abstract
Since the advent of generative AI, research on AI in Education (AIEd) has experienced explosive growth. This study systematically explores publication dynamics, keyword evolution, and research focuses in AIEd by analyzing 2952 papers from the Web of Science (1990–2024). Using bibliometric methods, 2800 English publications were screened, with analyses conducted via VOSviewer v1.6.20 and Python v3.11.5. Findings show a surge in publications post-2020, reaching 612 in 2023 and 1216 by November 2024. The US and China are leading contributors, with the University of London and the University of California system as core institutions. Keywords evolved from “AI” and “machine learning” (2018–2020) to “ChatGPT” and “ethics” (post-2022), reflecting dual focuses on technological applications and ethical considerations. Notably, 68% of highly cited papers address ethical controversies, while higher education and medical education emerge as primary application domains, involving personalized learning and intelligent tutoring systems. Cross-disciplinary research is evident, with education studies comprising the largest category. The study reveals AIEd’s shift toward socio-technical integration, highlighting generative AI’s transformative role yet identifying gaps in ethical governance and K-12 research. These insights inform policymakers, journals, and institutions, advocating for enhanced interdisciplinary collaboration and long-term impact research to balance innovation with educational ethics. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
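The keyword-evolution analysis this study describes, counting keyword frequencies per period to surface shifts such as the move from "machine learning" to "ChatGPT" and "ethics", can be sketched in a few lines of Python. The records below are invented for illustration; the paper itself works on 2800 screened Web of Science records with VOSviewer and Python.

```python
from collections import Counter, defaultdict

# Hypothetical bibliographic records: (publication year, author keywords).
records = [
    (2019, ["artificial intelligence", "machine learning"]),
    (2020, ["machine learning", "deep learning"]),
    (2023, ["chatgpt", "ethics", "higher education"]),
    (2024, ["chatgpt", "generative ai", "ethics"]),
]

# Count keyword occurrences per period to expose temporal shifts.
by_period = defaultdict(Counter)
for year, keywords in records:
    period = "pre-2022" if year < 2022 else "post-2022"
    by_period[period].update(keywords)

for period, counts in sorted(by_period.items()):
    print(period, counts.most_common(2))
```

At scale, the same per-period counts feed co-occurrence networks of the kind VOSviewer visualizes.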

23 pages, 650 KB  
Article
Exercise-Specific YANG Profile for AI-Assisted Network Security Labs: Bidirectional Configuration Exchange with Large Language Models
by Yuichiro Tateiwa
Information 2025, 16(8), 631; https://doi.org/10.3390/info16080631 - 24 Jul 2025
Viewed by 1409
Abstract
Network security courses rely on hands-on labs where students configure virtual Linux networks to practice attack and defense. Automated feedback is scarce because no standard exists for exchanging detailed configurations—interfaces, bridging, routing tables, iptables policies—between exercise software and large language models (LLMs) that could serve as tutors. We address this interoperability gap with an exercise-oriented YANG profile that augments the Internet Engineering Task Force (IETF) ietf-network module with a new network-devices module. The profile expresses Linux interface settings, routing, and firewall rules, and tags each node with roles such as linux-server or linux-firewall. Integrated into our LiNeS Cloud platform, it enables LLMs to both parse and generate machine-readable network states. We evaluated the profile on four topologies—from a simple client–server pair to multi-subnet scenarios with dedicated security devices—using ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash. Across 1050 evaluation tasks covering profile understanding (n = 180), instance analysis (n = 750), and instance generation (n = 120), the three LLMs answered correctly in 1028 cases, yielding an overall accuracy of 97.9%. Even with only minimal follow-up cues (≤3 turns), rather than handcrafted prompt chains, analysis tasks reached 98.1% accuracy and generation tasks 93.3%. To our knowledge, this is the first exercise-focused YANG profile that simultaneously captures Linux/iptables semantics and is empirically validated across three proprietary LLMs, attaining 97.9% overall task accuracy. These results lay a practical foundation for artificial intelligence (AI)-assisted security labs where real-time feedback and scenario generation must scale beyond human instructor capacity. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
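The exchange idea at the heart of this paper, serializing a lab topology into a machine-readable, role-tagged instance that an LLM can parse or emit, can be illustrated with a minimal Python sketch. The field names below are invented stand-ins; the actual profile extends the IETF ietf-network YANG module and carries full Linux/iptables semantics.

```python
import json

# Hypothetical instance document loosely modeled on a role-tagged topology.
topology = {
    "nodes": [
        {"id": "srv1", "role": "linux-server",
         "interfaces": [{"name": "eth0", "ipv4": "10.0.0.2/24"}]},
        {"id": "fw1", "role": "linux-firewall",
         "interfaces": [{"name": "eth0", "ipv4": "10.0.0.1/24"}],
         "iptables": [{"chain": "FORWARD", "policy": "DROP"}]},
    ]
}

def roles(doc):
    """Collect the device roles present in an instance document."""
    return {node["role"] for node in doc["nodes"]}

# Round-trip through a textual wire format, as an LLM-facing exchange would.
wire = json.dumps(topology)
parsed = json.loads(wire)
print(sorted(roles(parsed)))
```

Because both sides agree on the schema, the same document can be authored by the exercise software for the LLM to analyze, or generated by the LLM for the platform to instantiate.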

50 pages, 3777 KB  
Article
Intelligent Teaching Recommendation Model for Practical Discussion Course of Higher Education Based on Naive Bayes Machine Learning and Improved k-NN Data Mining Algorithm
by Xiao Zhou, Ling Guo, Rui Li, Ling Liu and Juan Pan
Information 2025, 16(6), 512; https://doi.org/10.3390/info16060512 - 19 Jun 2025
Cited by 3 | Viewed by 1052
Abstract
Aiming at the existing problems in practical teaching in higher education, we construct an intelligent teaching recommendation model for a higher education practical discussion course based on naive Bayes machine learning and an improved k-NN data mining algorithm. Firstly, we establish the naive Bayes machine learning algorithm to achieve accurate classification of the students in the class and then implement student grouping based on this accurate classification. Then, relying on the student grouping, we use the matching features between the students’ interest vector and the practical topic vector to construct an intelligent teaching recommendation model based on an improved k-NN data mining algorithm, in which the optimal complete binary encoding tree for the discussion topic is modeled. Based on the encoding tree model, an improved k-NN algorithm recommendation model is established to match the student group interests and recommend discussion topics. The experimental results prove that our proposed recommendation algorithm (PRA) can accurately recommend discussion topics for different student groups, match the interests of each group to the greatest extent, and improve the students’ enthusiasm for participating in practical discussions. Compared with the control groups, the user-based collaborative filtering recommendation algorithm (UCFA) and the item-based collaborative filtering recommendation algorithm (ICFA), under both single-dataset and multiple-dataset experimental conditions, the PRA achieves higher accuracy, recall, precision, and F1 score, and shows better recommendation performance and robustness. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
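The second stage of the pipeline described above, matching a student group's interest vector against topic vectors with a nearest-neighbour search, can be sketched in plain Python. The topics, interest dimensions, and cosine-similarity ranking below are invented for illustration; the paper's improved k-NN additionally builds an optimal complete binary encoding tree over the discussion topics.

```python
import math

def cosine(a, b):
    """Cosine similarity between two interest vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical discussion topics with interest vectors over three
# invented dimensions: (theory, programming, data analysis).
topics = {
    "neural-network lab": [0.2, 0.7, 0.1],
    "statistics workshop": [0.3, 0.1, 0.6],
    "algorithm seminar": [0.7, 0.2, 0.1],
}

def recommend(group_vector, k=1):
    """Return the k topics whose vectors best match a group's interests."""
    ranked = sorted(topics, key=lambda t: cosine(group_vector, topics[t]),
                    reverse=True)
    return ranked[:k]

print(recommend([0.1, 0.8, 0.1]))  # a programming-leaning group
```

The first stage (naive Bayes classification of students into groups) would produce the `group_vector` inputs; here they are supplied by hand.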

Review


15 pages, 706 KB  
Review
Trends in Publications on AI Tools and Applications in Learning Design to Personalization of Learning—A Scoping Review
by Jacoba Munar-Garau, Bárbara De-Benito-Crosetti and Jesus Salinas
Information 2025, 16(12), 1065; https://doi.org/10.3390/info16121065 - 3 Dec 2025
Viewed by 1014
Abstract
The continuous evolution of learning design (LD) necessitates a systematic review to comprehensively map the available tools that support educational practice, thereby highlighting current trends and development gaps. This study aimed to classify and analyze the features, evolution, and technological maturity of tools supporting the LD process. A Systematic Literature Review (SLR) was conducted following PRISMA guidelines, analyzing fifty-six tools identified from major academic databases based on their support level (design, implementation, evaluation), user focus, and other characteristics. The analysis revealed a clear transition from static, desktop-based applications to dynamic, web-based, and open-source platforms. Crucially, most tools focus heavily on the initial design phase, exhibiting significant deficiencies in supporting the subsequent implementation and, particularly, the evaluation phases. The review concludes that while the LD tool landscape is diverse, its development is uneven, suggesting a critical need for future tools to offer more robust, end-to-end lifecycle support and integrate current educational technological innovations such as Generative AI. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)

10 pages, 655 KB  
Review
AI-Enhanced Cyber Science Education: Innovations and Impacts
by William Triplett
Information 2025, 16(9), 721; https://doi.org/10.3390/info16090721 - 22 Aug 2025
Cited by 1 | Viewed by 1853
Abstract
Personalized, scalable, and data-driven learning is now possible in cyber science education because of artificial intelligence (AI). This article examines how AI technologies, such as intelligent tutoring, adaptive learning, virtual labs, and AI assessments, are being incorporated into cyber science curricula. Drawing on peer-reviewed examples and research studies published between 2020 and 2025, this paper combines qualitative analysis and framework analysis to identify common patterns in how these technologies were implemented and what effects they had. According to the findings, using AI in instruction boosts student interest, increases course completion rates, improves skills, and supports clear instruction in areas such as cybersecurity, digital forensics, and incident response. Ethical issues related to privacy, algorithmic bias, and unequal access are also covered. This study offers a practical framework to help teachers, curriculum designers, and institutional leaders adopt AI in cyber science education responsibly. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
