Artificial Intelligence Algorithms and Generative AI in Education

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: 31 July 2025 | Viewed by 7743

Special Issue Editors


Guest Editor
Prof. Dr. Antonio Sarasa Cabezuelo

Guest Editor Assistant
Dr. María Estefanía Avilés Mariño
E.T.S. de Ingenieros Industriales, Universidad Politécnica de Madrid, 28006 Madrid, Spain
Interests: linguistic typology; discourse analysis; computational linguistics; educational technology; language education; teaching methods; parallel computing; programming languages; human-computer interaction; artificial intelligence

Special Issue Information

Dear Colleagues,

The integration of artificial intelligence (AI) algorithms and generative AI in education represents a transformative approach that is reshaping the landscape of teaching and learning. These advanced technologies offer unprecedented opportunities to enhance educational experiences, personalize instruction, and revolutionize assessment methods. While AI has been making inroads in education for some time, recent advancements in generative AI present both exciting possibilities and complex challenges for educators, students, and educational institutions.

For this Special Issue, we invite contributions that explore the intersection of AI algorithms and generative AI with educational practices, examining their impact on learning outcomes, pedagogical strategies, and the overall educational ecosystem. We are particularly interested in research that investigates the potential of these technologies to create more engaging, efficient, and equitable learning environments.

We welcome submissions on a variety of topics, including, but not limited to, the following:

  • Applications of AI algorithms in educational content creation and curation;
  • Use of generative AI for personalized tutoring and adaptive learning;
  • AI-powered assessment and feedback systems;
  • Ethical considerations and challenges in implementing AI in education;
  • Impact of AI and generative AI on teacher roles and professional development;
  • Integration of AI technologies in learning management systems;
  • AI-driven educational analytics and decision-making processes;
  • Generative AI in language learning and writing instruction;
  • AI algorithms for early detection of learning difficulties and intervention;
  • Use of AI in special education and accessibility;
  • Implications of AI and generative AI for curriculum design and development;
  • Evaluation of AI-enhanced educational tools and platforms.

Authors are encouraged to submit their research articles, case studies, and critical analyses on these topics to advance our understanding of how AI algorithms and generative AI can shape the future of education.

Prof. Dr. Antonio Sarasa Cabezuelo
Guest Editor

Dr. María Estefanía Avilés Mariño
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence in education
  • generative AI
  • educational technology
  • personalized learning
  • AI-powered assessment
  • adaptive learning systems
  • educational data mining
  • natural language processing in education
  • machine learning for education
  • AI ethics in education
  • intelligent tutoring systems
  • AI-assisted teaching
  • educational chatbots
  • predictive analytics in education

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)

Research

24 pages, 948 KiB  
Article
A Scoring Algorithm for the Early Prediction of Academic Risk in STEM Courses
by Vanja Čotić Poturić, Sanja Čandrlić and Ivan Dražić
Algorithms 2025, 18(4), 177; https://doi.org/10.3390/a18040177 - 21 Mar 2025
Viewed by 251
Abstract
Educational data mining (EDM) and learning analytics (LA) are widely applied to predict student performance, particularly in determining academic success or failure. This study presents the development of a scoring algorithm for the early identification of students at risk of failing science, technology, engineering, and mathematics (STEM) courses. The proposed approach follows a structured process: First, educational data are collected, processed, and statistically analyzed. Next, numerical variables are transformed into dichotomous predictors, and their relevance is assessed using Cramér’s V measure to quantify their association with course outcomes. The final step involves constructing a scoring system that dynamically evaluates student performance over 15 weeks of instruction. Prospective validation of the model demonstrated excellent predictive performance (accuracy = 0.93, sensitivity = 0.95, specificity = 0.92), confirming its effectiveness in early risk detection. The resulting scoring algorithm is distinguished by its methodological simplicity, ease of implementation, and adaptability to different educational settings, making it a practical tool for timely interventions. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
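
The association step described in the abstract can be illustrated with a short sketch: Cramér's V computed between one dichotomised predictor and the pass/fail outcome. This is a minimal illustration under assumed variable names, an assumed threshold, and toy data, not the authors' implementation.

```python
# Minimal sketch of the association step: Cramér's V between a dichotomous
# predictor and the course outcome. Illustrative only; the threshold and data
# are assumptions, not the paper's code.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(x, y):
    """Cramér's V for two 0/1 indicator arrays, via the 2x2 contingency table."""
    table = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        table[int(xi), int(yi)] += 1
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1  # = 1 for a 2x2 table
    return np.sqrt(chi2 / (n * k))

# Example: dichotomise a numeric activity score at an assumed threshold,
# then measure its association with passing the course.
quiz_score = np.array([35, 80, 55, 90, 20, 75, 60, 85])
passed     = np.array([ 0,  1,  0,  1,  0,  1,  1,  1])
predictor = (quiz_score >= 50).astype(int)  # threshold is illustrative
print(f"Cramér's V = {cramers_v(predictor, passed):.2f}")
```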

34 pages, 6263 KiB  
Article
Advancing AI in Higher Education: A Comparative Study of Large Language Model-Based Agents for Exam Question Generation, Improvement, and Evaluation
by Vlatko Nikolovski, Dimitar Trajanov and Ivan Chorbev
Algorithms 2025, 18(3), 144; https://doi.org/10.3390/a18030144 - 4 Mar 2025
Cited by 1 | Viewed by 1234
Abstract
The transformative capabilities of large language models (LLMs) are reshaping educational assessment and question design in higher education. This study proposes a systematic framework for leveraging LLMs to enhance question-centric tasks: aligning exam questions with course objectives, improving clarity and difficulty, and generating new items guided by learning goals. The research spans four university courses—two theory-focused and two application-focused—covering diverse cognitive levels according to Bloom’s taxonomy. A balanced dataset ensures representation of question categories and structures. Three LLM-based agents—VectorRAG, VectorGraphRAG, and a fine-tuned LLM—are developed and evaluated against a meta-evaluator, supervised by human experts, to assess alignment accuracy and explanation quality. Robust analytical methods, including mixed-effects modeling, yield actionable insights for integrating generative AI into university assessment processes. Beyond exam-specific applications, this methodology provides a foundational approach for the broader adoption of AI in post-secondary education, emphasizing fairness, contextual relevance, and collaboration. The findings offer a comprehensive framework for aligning AI-generated content with learning objectives, detailing effective integration strategies, and addressing challenges such as bias and contextual limitations. Overall, this work underscores the potential of generative AI to enhance educational assessment while identifying pathways for responsible implementation. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
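
As a flavour of how such question-centric agents can be structured, the sketch below asks a model to rate the alignment of an exam question with course objectives and a target Bloom's level. It is not the paper's VectorRAG/VectorGraphRAG pipeline: `call_llm` is a stub standing in for whatever chat-completion client would actually be used, and the prompt wording is an assumption.

```python
# Minimal sketch of an LLM-based alignment check between an exam question and
# course learning objectives. `call_llm` is a stub; a real implementation would
# send the prompt to a model endpoint and return its text response.
import json

def call_llm(prompt: str) -> str:
    # Stub response so the sketch runs end to end; replace with a real client call.
    return json.dumps({"aligned": True, "score": 4,
                       "explanation": "Targets the stated objective at the intended level."})

def check_alignment(question: str, objectives: list[str], bloom_level: str) -> dict:
    """Ask the model to rate how well a question matches objectives and Bloom level."""
    prompt = (
        "You are an assessment reviewer.\n"
        f"Learning objectives: {objectives}\n"
        f"Target Bloom's level: {bloom_level}\n"
        f"Exam question: {question}\n"
        'Reply as JSON: {"aligned": bool, "score": 0-5, "explanation": str}'
    )
    return json.loads(call_llm(prompt))

result = check_alignment(
    question="Explain how a hash table resolves collisions with chaining.",
    objectives=["Describe collision-resolution strategies in hash tables"],
    bloom_level="Understand",
)
print(result)
```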

23 pages, 998 KiB  
Article
AI-Enhanced Design and Application of High School Geography Field Studies in China: A Case Study of the Yellow (Bohai) Sea Migratory Bird Habitat Curriculum
by Binglin Liu, Weijia Zeng, Weijiang Liu, Yi Peng and Nini Yao
Algorithms 2025, 18(1), 47; https://doi.org/10.3390/a18010047 - 15 Jan 2025
Cited by 1 | Viewed by 1016
Abstract
China’s Yellow (Bohai) Sea bird habitat is an important ecological region whose unique ecology and conservation challenges provide rich material for field studies. Our course design is supported by AI technology and develops students’ abilities through innovative functions such as dynamic data support, personalized learning paths, immersive field-study experiences, and diversified evaluation mechanisms. The course content revolves around the “human–land coordination concept” and spans pre-trip preparation, field study during the trip, and post-trip exhibition learning, covering regional cognition, remote sensing image analysis, field investigation, and the presentation of protection plans. ERNIE Bot helps optimize the learning path throughout the process. The course evaluation system is built on the three dimensions of “land to people”, “people to land”, and “coordination of the human–land relationship”, combines process-based and final evaluation, and uses ERNIE Bot for real-time monitoring, data analysis, personalized reports, and dynamic feedback, improving the objectivity and efficiency of evaluation and helping students and teachers optimize learning and teaching. However, AI has limitations in geography field studies, such as insufficient technical adaptability, the influence of students’ abilities and habits, and the need for teachers to adapt to changing roles. To address these, we propose optimization strategies such as improving data quality and technical platforms, strengthening students’ technical training, enhancing teachers’ AI application capabilities, and enriching AI functions and teaching scenarios, in order to improve the effectiveness of AI in geography field studies and to promote innovation in educational models and student capacity building. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)

27 pages, 6723 KiB  
Article
Exploring Early Learning Challenges in Children Utilizing Statistical and Explainable Machine Learning
by Mithila Akter Mim, M. R. Khatun, Muhammad Minoar Hossain, Wahidur Rahman and Arslan Munir
Algorithms 2025, 18(1), 20; https://doi.org/10.3390/a18010020 - 4 Jan 2025
Viewed by 953
Abstract
The early childhood period is critical for cognitive development, so understanding the factors that influence children’s learning abilities is essential for mitigating future educational challenges. This study investigates the impact of parenting techniques, sociodemographic characteristics, and health conditions on the learning abilities of children under five years old. Our primary goal is to explore the key factors that influence children’s learning abilities. For our study, we utilized the 2019 Multiple Indicator Cluster Surveys (MICS) dataset in Bangladesh. Using statistical analysis, we identified the key factors that affect children’s learning capability. To ensure proper analysis, we used extensive data preprocessing, feature manipulation, and model evaluation. Furthermore, we explored robust machine learning (ML) models to analyze and predict the learning challenges faced by children. These include logistic regression (LRC), decision tree (DT), k-nearest neighbor (KNN), random forest (RF), gradient boosting (GB), extreme gradient boosting (XGB), and bagging classification models. Out of these, GB and XGB, with 10-fold cross-validation, achieved an impressive accuracy of 95%, F1-score of 95%, and receiver operating characteristic area under the curve (ROC AUC) of 95%. Additionally, to interpret the model outputs and explore influencing factors, we used explainable AI (XAI) techniques like SHAP and LIME. Both statistical analysis and XAI interpretation revealed key factors that influence children’s learning difficulties. These include harsh disciplinary practices, low socioeconomic status, limited maternal education, and health-related issues. These findings offer valuable insights to guide policy measures to improve educational outcomes and promote holistic child development in Bangladesh and similar contexts. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
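
The modelling pipeline summarized above (gradient boosting with 10-fold cross-validation, interpreted with SHAP) can be sketched as follows. Feature names, synthetic data, and hyperparameters are placeholders; this is not the MICS analysis code.

```python
# Minimal sketch: gradient boosting with 10-fold cross-validation, then SHAP
# values for interpretation. Feature names and data are illustrative placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Assumed inputs: a feature table and a binary "learning difficulty" label.
X = pd.DataFrame(np.random.rand(200, 4),
                 columns=["discipline_score", "ses_index",
                          "maternal_education", "health_index"])
y = np.random.randint(0, 2, size=200)

model = GradientBoostingClassifier(random_state=42)
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.2f}")

# SHAP interpretation of the fitted model (TreeExplainer handles GB trees).
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```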

19 pages, 419 KiB  
Article
Fair and Transparent Student Admission Prediction Using Machine Learning Models
by George Raftopoulos, Gregory Davrazos and Sotiris Kotsiantis
Algorithms 2024, 17(12), 572; https://doi.org/10.3390/a17120572 - 13 Dec 2024
Viewed by 1430
Abstract
Student admission prediction is a crucial aspect of academic planning, offering insights into enrollment trends, resource allocation, and institutional growth. However, traditional methods often lack the ability to address fairness and transparency, leading to potential biases and inequities in the decision-making process. This paper explores the development and evaluation of machine learning models designed to predict student admissions while prioritizing fairness and interpretability. We employ a diverse set of algorithms, including Logistic Regression, Decision Trees, and ensemble methods, to forecast admission outcomes based on academic, demographic, and extracurricular features. Experimental results on real-world datasets highlight the effectiveness of the proposed models in achieving competitive predictive performance while adhering to fairness metrics such as demographic parity and equalized odds. Our findings demonstrate that machine learning can not only enhance the accuracy of admission predictions but also support equitable access to education by promoting transparency and accountability in automated systems. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
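
The two fairness criteria named in the abstract can be computed directly for binary predictions and a binary protected attribute, as in the minimal sketch below; the arrays and grouping variable are illustrative, not the paper's data.

```python
# Minimal sketch of the two fairness metrics named above, computed by hand
# for binary predictions and a binary protected attribute. Data are illustrative.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_diff(y_true, y_pred, group):
    """Max gap in TPR and FPR between groups (0 = equalized odds satisfied)."""
    gaps = []
    for label in (1, 0):  # TPR gap when label == 1, FPR gap when label == 0
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # actual admission outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # two demographic groups
print("Demographic parity gap:", demographic_parity_diff(y_pred, group))
print("Equalized odds gap:    ", equalized_odds_diff(y_true, y_pred, group))
```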

13 pages, 1631 KiB  
Article
Analysis of ChatGPT-3.5’s Potential in Generating NBME-Standard Pharmacology Questions: What Can Be Improved?
by Marwa Saad, Wesam Almasri, Tanvirul Hye, Monzurul Roni and Changiz Mohiyeddini
Algorithms 2024, 17(10), 469; https://doi.org/10.3390/a17100469 - 21 Oct 2024
Cited by 1 | Viewed by 1692
Abstract
ChatGPT by OpenAI is an AI model designed to generate human-like responses based on diverse datasets. Our study evaluated ChatGPT-3.5’s capability to generate pharmacology multiple-choice questions adhering to the NBME guidelines for USMLE Step exams. The initial findings show ChatGPT’s rapid adoption and potential in healthcare education and practice. However, concerns about its accuracy and depth of understanding prompted this evaluation. Using a structured prompt engineering process, ChatGPT was tasked to generate questions across various organ systems, which were then reviewed by pharmacology experts. ChatGPT consistently met the NBME criteria, achieving an average score of 13.7 out of 16 (85.6%) from expert 1 and 14.5 out of 16 (90.6%) from expert 2, with a combined average of 14.1 out of 16 (88.1%) (Kappa coefficient = 0.76). Despite these high scores, challenges in medical accuracy and depth were noted, often producing “pseudo vignettes” instead of in-depth clinical questions. ChatGPT-3.5 shows potential for generating NBME-style questions, but improvements in medical accuracy and understanding are crucial for its reliable use in medical education. This study underscores the need for AI models tailored to the medical domain to enhance educational tools for medical students. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
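
Inter-rater agreement of the kind reported above (Kappa coefficient = 0.76) is typically computed with Cohen's kappa over the two experts' per-criterion ratings. The sketch below shows the computation on invented ratings; it does not reproduce the study's data or result.

```python
# Minimal sketch of computing inter-rater agreement with Cohen's kappa.
# The ratings below are invented for illustration; they are not the study's data.
from sklearn.metrics import cohen_kappa_score

# 1 = criterion met, 0 = not met, for each of 16 NBME criteria on one question.
expert_1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]
expert_2 = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1]
print(f"Cohen's kappa = {cohen_kappa_score(expert_1, expert_2):.2f}")
```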
