Artificial Intelligence Algorithms and Generative AI in Education (2nd Edition)

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: 31 May 2026

Special Issue Editors


Dr. Antonio Sarasa Cabezuelo
Guest Editor
Facultad de Informática, Universidad Complutense de Madrid, 28040 Madrid, Spain
Interests: NoSQL databases; machine learning; artificial intelligence; e-learning; programming languages

Dr. María Estefanía Avilés Mariño
Guest Editor
E.T.S. de Ingenieros Industriales, Universidad Politécnica de Madrid, 28006 Madrid, Spain
Interests: linguistic typology; discourse analysis; computational linguistics; educational technology; language education; teaching methods; parallel computing; programming languages; human-computer interaction; artificial intelligence

Special Issue Information

Dear Colleagues,

The integration of artificial intelligence (AI) algorithms and generative AI in education represents a transformative approach that is reshaping the landscape of teaching and learning. These advanced technologies offer unprecedented opportunities to enhance educational experiences, personalize instruction, and revolutionize assessment methods. While AI has been making inroads in education for some time, the recent advancements in generative AI present both exciting possibilities and complex challenges for educators, students, and educational institutions.

For this Special Issue, we invite contributions that explore the intersection of AI algorithms and generative AI with educational practices, examining their impact on learning outcomes, pedagogical strategies, and the overall educational ecosystem. We are particularly interested in research that investigates the potential of these technologies to create more engaging, efficient, and equitable learning environments.

We welcome submissions on a variety of topics, including, but not limited to, the following:

  • Applications of AI algorithms in educational content creation and curation;
  • Use of generative AI for personalized tutoring and adaptive learning;
  • AI-powered assessment and feedback systems;
  • Ethical considerations and challenges in implementing AI in education;
  • Impact of AI and generative AI on teacher roles and professional development;
  • Integration of AI technologies in learning management systems;
  • AI-driven educational analytics and decision-making processes;
  • Generative AI in language learning and writing instruction;
  • AI algorithms for early detection of learning difficulties and intervention;
  • Use of AI in special education and accessibility;
  • Implications of AI and generative AI for curriculum design and development;
  • Evaluation of AI-enhanced educational tools and platforms.

Authors are encouraged to submit their research articles, case studies, and critical analyses on these topics to advance our understanding of how AI algorithms and generative AI can shape the future of education.

Dr. Antonio Sarasa Cabezuelo
Dr. María Estefanía Avilés Mariño
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence in education
  • generative AI
  • educational technology
  • personalized learning
  • AI-powered assessment
  • adaptive learning systems
  • educational data mining
  • natural language processing in education
  • machine learning for education
  • AI ethics in education
  • intelligent tutoring systems
  • AI-assisted teaching
  • educational chatbots
  • predictive analytics in education

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

16 pages, 689 KB  
Article
The Impacts of Large Language Model Addiction on University Students’ Mental Health: Gender as a Moderator
by Ibrahim A. Elshaer and Alaa M. S. A. Azazz
Algorithms 2025, 18(12), 789; https://doi.org/10.3390/a18120789 - 12 Dec 2025
Abstract
This study tested the impacts of large language model (LLM) addiction on the mental health of university students, employing gender as a moderator. Data was collected from 750 university students from multiple fields of study (i.e., business, medical, education, and social sciences) using a self-administered questionnaire. Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed to analyze the collected data; this study tested the impacts of three LLM addiction dimensions—withdrawal and health problems (W&HPs), time management and performance (TM&P), and social comfort (SC)—on stress, depression, and anxiety as dimensions of mental health disorders. Findings indicate that TM&P and SC had a significant positive impact on stress, depression, and anxiety, implying that overdependence (as an early-stage precursor and behavioral antecedent of LLM addiction) on LLMs for academic achievements and emotional reassurance contributed to higher levels of psychological distress. On the contrary, W&HP showed a weak but significant negative correlation with stress, signaling a probable self-regulatory coping approach. Furthermore, gender was found to successfully moderate several of the tested relationships, where male university students showed stronger relationships between LLM addiction dimensions and adverse mental health consequences, whereas female university students proved greater emotional constancy and resilience. Theoretically, this paper extends the digital addiction frameworks into the AI setting, highlighting gendered models of emotional exposure. Practically, this study highlights the urgent need for gender-sensitive digital well-being intervention programs that address the overuse of LLMs, a prominent category of generative AI. These outcomes emphasize the significance of balancing technological involvement with mental health protection, determining how LLM usage can specifically contribute to digital addiction and related psychological consequences among university students.
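As a rough, hypothetical illustration of the gender-moderation analysis described in this abstract (the study itself uses PLS-SEM rather than ordinary least squares), the following Python sketch fits a moderated regression with an interaction term; the data file and column names are assumptions for illustration only, not artifacts of the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey export; column names (stress, tm_p, soc_comfort, w_hp, gender)
# are assumptions for illustration, not the study's actual variable names.
df = pd.read_csv("llm_addiction_survey.csv")

# The tm_p:C(gender) interaction coefficient captures whether the slope of the
# time-management-and-performance dimension on stress differs across gender groups,
# i.e., whether gender moderates that relationship.
model = smf.ols("stress ~ tm_p * C(gender) + soc_comfort + w_hp", data=df).fit()
print(model.summary())
```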

28 pages, 1076 KB  
Article
From Subsumption to Semantic Mediation: A Generative Orchestration Architecture for Autonomous Systems
by Andrei Kojukhov, Ilya Levin and Arkady Bovshover
Algorithms 2025, 18(12), 773; https://doi.org/10.3390/a18120773 - 8 Dec 2025
Abstract
This paper extends Rodney Brooks’ subsumption architecture into the era of Agentic AI by replacing its priority arbiter with a Generative Orchestrator that performs semantic mediation—interpreting heterogeneous agent outputs and integrating them into a coherent action rather than merely arbitrating among them. Brooks’ original model (1986) demonstrated that autonomous behavior can emerge from parallel reactive layers without symbolic representation, establishing principles later recognized as foundational to agentic systems: environmental responsiveness, autonomy, and goal-directed action. Contemporary Agentic AI, however, requires capabilities beyond mechanical response—decision-making, adaptive strategy, and goal pursuit. We therefore reinterpret subsumption layers as four interacting agent types: reflex, model-based, goal-based, and utility-based, coordinated through semantic mediation. The Generative Orchestrator employs large language models not for content generation but for decision synthesis, enabling integrative agentic behavior. This approach merges real-time responsiveness with interpretive capacity for learning, reasoning, and explanation. An autonomous driving case study demonstrates how the architecture sustains behavioral autonomy while generating human-interpretable rationales for its actions. Validation was conducted through a Python-based proof-of-concept on an NVIDIA platform, reproducing the scenario to evaluate and confirm the architecture. This framework delineates a practical pathway toward advancing autonomous agents from reactive control to fully Agentic AI systems capable of operating in open, uncertain environments.
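To make the semantic-mediation idea concrete, the following minimal Python sketch shows four agent types each proposing an action and an orchestrator synthesizing a single decision with an integrated rationale. All class and function names are hypothetical, and the stubbed synthesize() merely stands in for the paper's LLM-based decision synthesis.

```python
# Minimal sketch of semantic mediation: four agent layers propose actions and a
# single orchestrating step integrates them instead of picking a winner by fixed priority.
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str      # which layer produced the proposal
    action: str     # proposed action
    rationale: str  # human-readable justification

def reflex_agent(obs):        return Proposal("reflex", "brake" if obs["obstacle"] else "cruise", "immediate hazard check")
def model_based_agent(obs):   return Proposal("model", "slow_down" if obs["wet_road"] else "cruise", "world-model prediction")
def goal_based_agent(obs):    return Proposal("goal", "change_lane", "route progress toward destination")
def utility_based_agent(obs): return Proposal("utility", "cruise", "best comfort/energy trade-off")

def synthesize(proposals):
    """Placeholder for LLM-driven semantic mediation: honor safety-critical proposals
    first and attach an integrated, human-readable rationale for the chosen action."""
    safety = [p for p in proposals if p.action == "brake"]
    chosen = safety[0] if safety else proposals[-1]
    rationale = "; ".join(f"{p.agent}: {p.rationale}" for p in proposals)
    return chosen.action, rationale

obs = {"obstacle": False, "wet_road": True}
proposals = [f(obs) for f in (reflex_agent, model_based_agent, goal_based_agent, utility_based_agent)]
action, why = synthesize(proposals)
print(action, "|", why)
```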

28 pages, 514 KB  
Article
Dynamic Assessment with AI (Agentic RAG) and Iterative Feedback: A Model for the Digital Transformation of Higher Education in the Global EdTech Ecosystem
by Rubén Juárez, Antonio Hernández-Fernández, Claudia de Barros-Camargo and David Molero
Algorithms 2025, 18(11), 712; https://doi.org/10.3390/a18110712 - 11 Nov 2025
Abstract
This article formalizes AI-assisted assessment as a discrete-time policy-level design for iterative feedback and evaluates it in a digitally transformed higher-education setting. We integrate an agentic retrieval-augmented generation (RAG) feedback engine—operationalized through planning (rubric-aligned task decomposition), tool use beyond retrieval (tests, static/dynamic analyzers, rubric checker), and self-critique (checklist-based verification)—into a six-iteration dynamic evaluation cycle. Learning trajectories are modeled with three complementary formulations: (i) an interpretable update rule with explicit parameters η and λ that links next-step gains to feedback quality and the gap-to-target and yields iteration-complexity and stability conditions; (ii) a logistic-convergence model capturing diminishing returns near ceiling; and (iii) a relative-gain regression quantifying the marginal effect of feedback quality on the fraction of the gap closed per iteration. In a Concurrent Programming course (n=35), the cohort mean increased from 58.4 to 91.2 (0–100), while dispersion decreased from 9.7 to 5.8 across six iterations; a Greenhouse–Geisser corrected repeated-measures ANOVA indicated significant within-student change. Parameter estimates show that higher-quality, evidence-grounded feedback is associated with larger next-step gains and faster convergence. Beyond performance, we engage the broader pedagogical question of what to value and how to assess in AI-rich settings: we elevate process and provenance—planning artifacts, tool-usage traces, test outcomes, and evidence citations—to first-class assessment signals, and outline defensible formats (trace-based walkthroughs and oral/code defenses) that our controller can instrument. We position this as a design model for feedback policy, complementary to state-estimation approaches such as knowledge tracing. We discuss implications for instrumentation, equity-aware metrics, reproducibility, and epistemically aligned rubrics. Limitations include the observational, single-course design; future work should test causal variants (e.g., stepped-wedge trials) and cross-domain generalization.
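As an illustrative reading of the gap-closing update rule this abstract refers to (the paper's exact functional form is not reproduced here), the following Python sketch simulates six feedback iterations under the assumed rule s_{t+1} = s_t + η·q_t·(T − s_t), where q_t is the feedback quality at iteration t and T is the target score; the decay weight lam is a hypothetical stand-in for the paper's λ, and all parameter values are invented for demonstration.

```python
# Hedged numerical sketch of an iterative, gap-closing feedback update.
# Assumed form (not the paper's exact equation): s_{t+1} = s_t + eta * q_t * (T - s_t),
# with a mild, hypothetical decay of responsiveness controlled by lam.
def simulate(s0=58.4, target=100.0, eta=0.35, lam=0.05,
             quality=(0.80, 0.85, 0.90, 0.90, 0.95, 0.95)):
    scores = [s0]
    for t, q in enumerate(quality):             # six feedback iterations
        effective_eta = eta * (1.0 - lam * t)   # assumed decay standing in for lambda
        scores.append(scores[-1] + effective_eta * q * (target - scores[-1]))
    return scores

# Higher-quality feedback closes a larger fraction of the remaining gap per iteration.
print([round(s, 1) for s in simulate()])
```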

35 pages, 546 KB  
Article
Enhancing Semi-Supervised Learning in Educational Data Mining Through Synthetic Data Generation Using Tabular Variational Autoencoder
by Georgios Kostopoulos, Nikos Fazakis, Sotiris Kotsiantis and Yiannis Dimakopoulos
Algorithms 2025, 18(10), 663; https://doi.org/10.3390/a18100663 - 19 Oct 2025
Abstract
This paper presents TVAE-SSL, a novel semi-supervised learning (SSL) paradigm that involves Tabular Variational Autoencoder (TVAE)-sampled synthetic data injection into the training process to enhance model performance under low-label data conditions in Educational Data Mining tasks. The algorithm begins with training a TVAE on the given labeled data to generate imitative synthetic samples of the underlying data distribution. These synthesized samples are treated as additional unlabeled data and combined with the original unlabeled ones in order to form an augmented training pool. A standard SSL algorithm (e.g., Self-Training) is trained using a base classifier (e.g., Random Forest) on the combined dataset. By expanding the pool of unlabeled samples with realistic synthetic data, TVAE-SSL improves training sample quantity and diversity without introducing label noise. Large-scale experiments on a variety of datasets demonstrate that TVAE-SSL can outperform baseline supervised models in the full labeled dataset in terms of accuracy, F1-score and fairness metrics. Our results demonstrate the capacity of generative augmentation to enhance the effectiveness of semi-supervised learning for tabular data.
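A minimal Python sketch of the pipeline described above, under stated assumptions: a Gaussian-perturbation placeholder stands in for the Tabular Variational Autoencoder (a real TVAE implementation would replace it), scikit-learn's SelfTrainingClassifier with a Random Forest base classifier plays the role of the standard SSL algorithm, and the data are random stand-ins for an educational tabular dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

def generate_synthetic_samples(X_labeled, n_samples):
    """Placeholder generator: jitter labeled rows with Gaussian noise.
    In TVAE-SSL this step is a Tabular Variational Autoencoder trained on the labeled data."""
    idx = rng.integers(0, len(X_labeled), size=n_samples)
    return X_labeled[idx] + rng.normal(scale=0.05, size=(n_samples, X_labeled.shape[1]))

# Hypothetical tabular data: a few labeled rows, many unlabeled ones.
X_labeled = rng.normal(size=(50, 8))
y_labeled = rng.integers(0, 2, size=50)
X_unlabeled = rng.normal(size=(400, 8))

# Augment the unlabeled pool with synthetic samples (label -1 means "unlabeled" in scikit-learn).
X_synth = generate_synthetic_samples(X_labeled, n_samples=200)
X_train = np.vstack([X_labeled, X_unlabeled, X_synth])
y_train = np.concatenate([y_labeled, -np.ones(len(X_unlabeled) + len(X_synth), dtype=int)])

# Standard SSL wrapper (self-training) around a Random Forest base classifier.
model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_train, y_train)
print("pseudo-labeled samples:", int((model.transduction_ != -1).sum()) - len(y_labeled))
```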
