This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Open Access | Systematic Review
Machine Learning Approaches for Evaluating STEM Education Projects Within Artificial Intelligence in Education: A Systematic Review of Frameworks and Rubrics
by
Catalina Muñoz-Collazos *
and
Carolina González-Serrano
Faculty of Electronics and Telecommunications Engineering, University of Cauca, Popayán 190002, Colombia
* Author to whom correspondence should be addressed.
Submission received: 23 October 2025 / Revised: 28 November 2025 / Accepted: 28 November 2025 / Published: 3 December 2025
Featured Application
The findings provide evidence for integrating machine learning into AI-driven educational assessment models grounded in STEM frameworks and evaluation rubrics.
Abstract
Objectively evaluating STEM education projects is essential for ensuring fairness, consistency, and evidence-based instructional decisions. Recent interest in data-informed approaches highlights the use of standardized frameworks, rubric-based assessment, and computational techniques to support more transparent evaluation practices. This systematic review examines how machine learning (ML) techniques—within the broader field of Artificial Intelligence in Education (AIED)—contribute to the evaluation of STEM projects. Following Kitchenham’s guidelines and PRISMA 2020, searches were conducted across Scopus, Web of Science, ScienceDirect, and IEEE Xplore, resulting in 39 studies published between 2020 and 2025. The findings show that current STEM frameworks emphasize disciplinary integration, inquiry, creativity, and collaboration, while rubrics operationalize these principles through measurable criteria. ML techniques have been applied to classification, prediction, and multidimensional analysis; however, these computational approaches remain largely independent of established frameworks and rubric structures. Existing ML models demonstrate feasibility for modeling evaluative indicators but do not yet integrate pedagogical constructs into automated assessment pipelines. By synthesizing evidence across frameworks, rubrics, and ML techniques, this review clarifies the methodological landscape and identifies opportunities to advance scalable, transparent, and pedagogically aligned evaluation practices. The results provide a conceptual foundation for future work aimed at developing integrative and trustworthy ML-supported evaluation systems in STEM education.
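The abstract describes ML techniques applied to classification over rubric-style evaluative indicators. As an illustrative sketch only (not drawn from the reviewed studies), the pattern can be framed as a supervised classification task in which human-scored rubric criteria serve as features and an overall project rating is the target. All criterion names, data, and level labels below are hypothetical, and the synthetic data is generated for demonstration.

```python
# Hypothetical sketch: rubric-based classification of STEM projects.
# Criteria names, rating levels, and all data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Four hypothetical rubric criteria, each scored 0-4 by human raters,
# for 200 synthetic projects.
criteria = ["integration", "inquiry", "creativity", "collaboration"]
X = rng.integers(0, 5, size=(200, len(criteria)))

# Hypothetical overall rating derived from the mean criterion score,
# discretized into three levels: 0 = developing, 1 = proficient, 2 = exemplary.
y = np.digitize(X.mean(axis=1), bins=[1.5, 3.0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A generic off-the-shelf classifier stands in for the various ML models
# surveyed in the review; no specific study's method is implied.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

A pipeline of this shape is where framework alignment could enter: the feature columns are exactly the rubric's criteria, so the model's inputs remain interpretable in pedagogical terms rather than opaque learned representations.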
Share and Cite
MDPI and ACS Style
Muñoz-Collazos, C.; González-Serrano, C.
Machine Learning Approaches for Evaluating STEM Education Projects Within Artificial Intelligence in Education: A Systematic Review of Frameworks and Rubrics. Appl. Sci. 2025, 15, 12812.
https://doi.org/10.3390/app152312812
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.