Review

The Impact of AI-Driven Application Programming Interfaces (APIs) on Educational Information Management

by David Pérez-Jorge 1, Miriam Catalina González-Afonso 1, Anthea Gara Santos-Álvarez 1, Zeus Plasencia-Carballo 2 and Carmen de los Ángeles Perdomo-López 2,*

1 Department of Didactics and Educational Research, Faculty of Education, University of La Laguna, 38204 San Cristóbal de La Laguna, Spain
2 Department of Specific Didactics, Faculty of Education, University of La Laguna, 38204 San Cristóbal de La Laguna, Spain
* Author to whom correspondence should be addressed.
Information 2025, 16(7), 540; https://doi.org/10.3390/info16070540
Submission received: 28 May 2025 / Revised: 20 June 2025 / Accepted: 23 June 2025 / Published: 25 June 2025
(This article belongs to the Special Issue New Information Communication Technologies in the Digital Era)

Abstract

In today’s digitalized educational landscape, the intelligent use of information is essential for personalizing learning, improving assessment accuracy, and supporting data-driven pedagogical decisions. This systematic review examines the integration of Application Programming Interfaces (APIs) powered by Artificial Intelligence (AI) to enhance educational information management and learning processes. A total of 27 peer-reviewed studies published between 2013 and 2025 were analyzed. The review first provides a general description of the selected works, followed by a breakdown by dimensions to identify recurring patterns, stated interests and gaps in the current scientific literature on the use of AI-driven APIs in education. The findings highlight five main benefits: data interoperability, personalized learning, automated feedback, real-time student monitoring, and predictive performance analytics. All studies addressed personalization, 74.1% focused on platform integration, and 37% examined automated feedback. Reported outcomes include improvements in engagement (63%), comprehension (55.6%), and academic achievement (48.1%). However, the review also identifies concerns about privacy, algorithmic bias, and limited methodological rigor in existing research. The study concludes with a conceptual model that synthesizes these findings from pedagogical, technological, and ethical perspectives, providing guidance for more adaptive, inclusive, and responsible uses of AI in education.

1. Introduction

1.1. Artificial Intelligence and Educational Transformation: New Horizons in Teaching and Learning

The growing digitalization of educational environments in recent years has underscored the need to integrate tools and systems through more efficient and adaptive technological solutions. Within this context of transformation, AI has emerged as a key enabler. Its capacity to generate algorithms capable of making recommendations, predictions, decisions and learning across diverse contexts [1] has established it as one of the most influential technologies in teaching and learning processes.
This development has been propelled by growing demands for task automation, personalized learning, and dynamic content generation, leading to a new pedagogical paradigm centered on intensive data use. In this framework, various concepts related to AI in education have gained prominence, such as personalized learning [2], intelligent tutoring systems [3], virtual assistants [4], immersive and interactive experiences [5] and the use of data to optimize academic performance [6].
Although research on AI in education has a well-established trajectory, its development has accelerated over the past decade with the rise of techniques such as machine learning, natural language processing (NLP), and deep neural networks, all of which require large volumes of training data [1,3].
In this context, Application Programming Interfaces (APIs) play a fundamental role as communication and data exchange channels between Learning Management Systems (LMS) and other platforms. Their use has become widespread, particularly in higher education, where they facilitate the integration of functionalities such as content delivery, assessment, synchronous interaction, and collaborative spaces [7,8,9], enabling seamless API integration that supports platform interoperability without the need for complex processes [10].
Several leading software providers offer APIs for educational environments, each enabling specific functionalities such as learning analytics, personalization, integration with LMSs and performance tracking. Among the most prominent providers are Google for Education APIs (e.g., Classroom API and Admin SDK), Microsoft Graph API (used in Microsoft Teams and Education Insights), Moodle’s Web Service APIs and Instructure’s Canvas LMS APIs. These tools support system interoperability and the real-time integration of learning data. Costs vary depending on licensing models and usage levels: some APIs, such as Google’s, are freely available for basic educational use, whereas others operate under commercial licenses or require institutional subscriptions (e.g., Canvas or Microsoft Education solutions).
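To make the kind of integration these providers expose more concrete, the sketch below assembles a request for Canvas LMS’s REST endpoint `GET /api/v1/courses` and reduces a course-list JSON response to identifier–name pairs. The institutional domain, access token and sample payload are invented for illustration; a real deployment would send the request with an HTTP client and handle pagination and error responses.

```python
import json
from urllib.parse import urljoin

# Hypothetical values: replace with a real Canvas domain and an
# institutional access token obtained from the LMS administrator.
CANVAS_BASE = "https://canvas.example.edu"
ACCESS_TOKEN = "YOUR_TOKEN"

def build_courses_request(base_url: str, token: str, per_page: int = 50):
    """Assemble the URL and auth headers for Canvas's GET /api/v1/courses."""
    url = urljoin(base_url, f"/api/v1/courses?per_page={per_page}")
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

def summarize_courses(payload: str):
    """Reduce a Canvas course-list JSON response to (id, name) pairs."""
    return [(c["id"], c["name"]) for c in json.loads(payload)]

url, headers = build_courses_request(CANVAS_BASE, ACCESS_TOKEN)
# A real call would be: requests.get(url, headers=headers)
sample = '[{"id": 101, "name": "Didactics I"}, {"id": 102, "name": "AI in Education"}]'
print(summarize_courses(sample))
```

Separating request construction from response parsing keeps the interoperability logic testable without network access, which mirrors how such integrations are typically wrapped inside institutional middleware.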
This study provides a structured and critical synthesis of recent developments in the use of AI-driven APIs in education, examining their functionalities and emerging trends. It explores the extent to which these technologies contribute to the intelligent management of educational data, platform integration, and pedagogical adaptation, while also addressing identified limitations, including ethical and regulatory challenges. As a result, the review proposes a conceptual framework that articulates the functional dimensions necessary for developing informed, connected and adaptable educational ecosystems.
Simultaneously, it analyzes the digital transformation of education, highlighting the growing integration of artificial intelligence and APIs as drivers of change. It then presents the methodological design of the systematic review conducted, followed by an analysis of the main findings related to intelligent data management, interoperability, personalized learning and ethical challenges. Finally, the conclusions are presented alongside a conceptual framework that guides future lines of research in the development of connected, adaptive, and data-driven educational ecosystems.

1.2. Transformation of Instructional Planning and the Impact of Generative AI on Education

The integration of AI in education responds to the need to support teachers in their daily tasks. It is estimated that approximately 40% of the time devoted to teaching is spent on tasks that could be automated [11], and this is precisely where AI can serve as a valuable resource. AI can enhance the efficiency of educational processes and resources by enabling automated learning, more personalized planning, predictive analysis of academic performance and the creation of activities tailored to the specific needs of each student [12].
One of the areas where this impact is already visible is assessment. As Lloret et al. [13] highlight, automated evaluation, powered by machine learning algorithms, increases objectivity, reduces implementation time and transforms a traditionally complex process into a more agile task. These tools can identify learning errors, anticipate dropout risks and detect potential difficulties [1].
Solutions such as intelligent tutoring systems, chatbots and virtual assistants are currently transforming learning experiences. These tools not only optimize time and organization but also foster adaptability and educational inclusion [14,15,16]. However, their use raises questions that go beyond technical considerations. The educational community is beginning to reflect on the ethical and pedagogical challenges this transformation entails. Some studies emphasize the need to rethink assessment practices, advocating for models that foster creativity and critical thinking—competencies that AI is not yet capable of replicating [17].
Generative AI has emerged forcefully in the educational context. This system, based on generative neural networks and Large Language Models (LLMs), can produce original content—texts, images, music—by mimicking human creative processes [18]. Despite its impressive capacity to generate information, it also entails significant responsibility: it is essential to verify the accuracy and relevance of the content it produces [19]. Furthermore, its ethical and responsible use requires the development of new skills, competencies, and values for life and work in this new AI-driven era [20,21].
Tools such as ChatGPT V4.0 (OpenAI), Copilot, or Gemini (Google) are becoming established as pedagogical allies, transforming teaching practices through the creation of high-quality materials and the optimization of processes such as assessment [22]. According to Berggren and Söderström [23], the use of virtual assistants in the classroom not only enhances learning efficiency but also increases student motivation and satisfaction.

1.3. Artificial Intelligence and APIs in Education: Transformation, Ethical Challenges and New Research Perspectives

The implementation of AI and APIs in education is reshaping traditional pedagogical approaches. One of the most notable contributions is personalized learning, understood as the adaptation of content, pacing and strategies to the individual characteristics of students through machine learning algorithms [24]. This process requires close collaboration between technology and human agents to prevent automated decisions from lacking pedagogical relevance [25,26] and it must also consider emotional, motivational, and well-being factors [27].
Within this digital ecosystem, APIs are essential for interoperability, automation and content adaptation. They connect educational platforms and support functions like personalized feedback, resource recommendations, engagement monitoring and performance prediction [28,29,30,31,32,33,34,35].
However, the intensive use of AI and APIs has also raised concerns about algorithmic culture, surveillance, bias, academic integrity and equity [36,37,38,39]. Since educational AI systems rely on probabilistic models, they may oversimplify complex processes, produce errors, and reproduce biases in predictions and recommendations if not designed and implemented under ethical principles [40,41,42,43,44]. These concerns also encompass student data privacy [45,46,47,48] and algorithmic transparency, which underscores the importance of developing explainable AI (XAI) systems that allow for understanding and auditing automated decision-making processes [40].
Despite these challenges, multiple opportunities for research are emerging. Notable areas include the design of ethical and regulatory frameworks [33,49], the development of explainable AI supported by APIs [50], studies on the pedagogical impact of personalization [2,51,52], bias mitigation [34,53], and the promotion of human-AI collaboration models that reinforce the role of educators [40].
In this context of AI- and API-mediated educational transformation, it is essential to deepen our understanding of how these technologies are shaping teaching and learning ecosystems. The review of existing studies reveals a growing implementation of AI-based solutions for educational management, but also raises questions regarding their technical integration, pedagogical impact, and ethical implications. Based on the theoretical framework and context presented, this study pursues, through a systematic review, the following aims: (1) identify the main AI-powered educational APIs used for intelligent information management in learning contexts; (2) analyze how these APIs enable effective integration between platforms and educational systems; (3) explore the pedagogical benefits of automated data use—such as personalized learning, performance tracking, immediate feedback, and risk prediction; (4) examine the ethical, technical, and regulatory challenges associated with data management via AI; and (5) propose a conceptual framework that synthesizes the contributions of these technologies to the development of more informed, connected, and adaptive educational environments.

2. Materials and Methods

2.1. Design

To address the objectives of this study, a systematic review of the literature was conducted following the guidelines established in the PRISMA 2020 statement for systematic reviews [54]. Through the strategies and procedures of this methodology, it was possible to identify, select and synthesize the most significant and relevant studies related to the topic of this work. This approach ensures transparency, rigor and replicability in the review process.
This study responds to a timely and relevant issue, as it contributes to understanding how AI-driven APIs are transforming the management and use of educational data. In a world where digital technology is spreading at an unprecedented pace, these tools enable system compatibility, personalized learning and data-informed teaching decisions. However, despite advances in API-focused research, there is no comprehensive systematic review that examines their effects on education and technology in an integrated way. Although ethical aspects were not considered as a specific criterion in this study, their relevance is acknowledged and addressed, as they represent an emerging and current issue that is generating increasing debate concerning implications for education. This review seeks to address that gap by providing a critical overview that encourages reflection on the need to develop AI-based learning environments that are effective, flexible and responsible.
To define the scope of this review, key concepts were selected related to AI-based educational technologies, interoperability through APIs, educational information management and learning personalization strategies. The consideration of these elements is fundamental to this study, as it allows the analysis to focus on the importance of developing more efficient, adapted and student-centered digital learning environments.

2.2. Inclusion and Exclusion Criteria

To ensure the relevance, appropriateness and quality of the selected studies, inclusion and exclusion criteria were established based on the objectives of the study. These criteria allowed the analysis to be limited to works that rigorously addressed the use of AI-powered APIs in educational contexts. Both empirical and theoretical studies were considered, as long as they demonstrated academic rigor and met quality standards and offered a relevant contribution to the analysis of educational information management, learning personalization, or pedagogical decision-making based on the use of AI-powered APIs in educational contexts. Table 1 summarizes the criteria applied during the screening and selection process of the manuscripts.
To collect and select the documents for this study, four relevant scientific databases were considered: Scopus, Web of Science (WoS), IEEE Xplore and Dialnet. These databases were selected considering the study’s scope and its multidisciplinary focus, which required exploring research in the fields of educational technology, computer science, AI and education and pedagogy. These databases index research from both the educational and technological domains, which is essential for identifying studies that converge on APIs and AI.
Before conducting the final search, a preliminary review was carried out to identify and select the keywords that best aligned with the study’s focus. Based on this keyword selection, specific search equations were designed for each database, using Boolean operators (AND, OR) and truncated terms in both English and Spanish, in accordance with the principles of the PRISMA statement.
The keywords used included terms such as “artificial intelligence”, “application programming interface”, “educational technology”, “learning analytics”, “student data”, “personalization”, “integration”, “decision making” and their Spanish equivalents.
Search equations were tailored for each database with the aim of maximizing document retrieval without restricting the results to overly specific term variations. Terms in both English and Spanish were selected according to the objectives of the review and combined using the aforementioned Boolean operators. Table 2 presents the truncated terms used in each search engine.
In accordance with the PRISMA 2020 guidelines, four reviewers independently screened the titles and abstracts of all records retrieved through the database searches. The same reviewers independently assessed the eligibility of the full-text articles. Any discrepancies were resolved through discussion and consensus. Likewise, data extraction was carried out independently by these reviewers using a standardized template to ensure consistency and accuracy.

2.3. Study Selection Procedure

This systematic review was conducted between April and May 2025. Once the search was completed across the four selected databases, the results underwent a screening process in accordance with the previously established criteria, in order to delimit the final sample of studies included in the analysis.
The selection process was carried out independently and in a standardized manner. In the first phase, the search equations designed for each database were applied. Duplicates—identified primarily between Scopus and WoS—were then removed using the Mendeley reference manager. Subsequently, the predefined inclusion and exclusion criteria were applied, followed by a review of the titles and abstracts of the documents.
In the final phase, a full-text reading of the 100 preselected manuscripts was conducted to assess their alignment with the objectives of this study. As a result, a final sample of 27 articles was obtained.
For the included studies, a data extraction sheet was developed based on the Cochrane Consumers and Communication Group template [55], adapted to the specific criteria of this study on the use of AI-powered APIs in educational contexts. The entire process is summarized in the flow diagram presented in Figure 1.

2.4. Inter-Rater Agreement Analysis

To ensure the adequacy of the selection and classification of the studies included in this review, an inter-rater agreement analysis was conducted. This analysis involved four reviewers who examined the 100 documents identified after the initial screening phase, applying the predefined inclusion and exclusion criteria. These criteria were based on dimensions related to the use of AI-powered APIs, intelligent educational information management, personalized learning, learning analytics, and data-informed pedagogical decision-making.
Out of the 100 documents evaluated, agreement was reached in 88 cases. In 12 studies, discrepancies appeared regarding their inclusion. These differences were resolved in a second round of joint review, during which the evaluators discussed each case and reached a consensus on the final decision. As a result of this process, a final sample of 27 manuscripts was obtained for analysis.
To assess the extent of agreement among reviewers beyond what could be attributed to chance, the Perreault and Leigh coefficient [56] was applied. The resulting value was I = 0.938, indicating a high level of inter-rater agreement and strong consistency in the application of the criteria.
As a complementary measure, Perplexity AI was used as a tool for semantic and content validation. This AI tool was not considered an independent evaluator, but rather a methodological support resource to verify the alignment of the selected documents with the analysis criteria. Its use enhanced the transparency of the procedure and provided additional support for the reviewers’ decisions, particularly in ambiguous cases.
The coefficient was calculated as follows:

I = [(F − 1)/F] × [1 − (Σ p_j (1 − p_j)) / (N (k − 1)/k)]

where:
  • I = reliability coefficient
  • F = number of judges or evaluators
  • p_j = proportion of judges who assigned a response to category j
  • N = number of items or units assessed
  • k = number of possible response categories
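The sketch below is a literal transcription of the formula as printed above, applied to an invented screening scenario (four reviewers, binary include/exclude decisions). Reading p_j as the per-item proportion of judges choosing category j is an interpretive assumption, and note that, read literally, the printed formula yields (F − 1)/F rather than 1 even under perfect agreement, so this should be treated as an illustration of the calculation rather than a validated implementation of the Perreault and Leigh index.

```python
from collections import Counter

def agreement_index(ratings, k):
    """Compute the agreement index exactly as the formula is printed.

    `ratings` holds one inner list per item, with one category label per
    judge; `k` is the number of possible response categories.
    """
    F = len(ratings[0])   # number of judges
    N = len(ratings)      # number of items
    disagreement = 0.0
    for item in ratings:
        counts = Counter(item)
        # Sum of p_j * (1 - p_j) over the categories used for this item.
        disagreement += sum((c / F) * (1 - c / F) for c in counts.values())
    return ((F - 1) / F) * (1 - disagreement / (N * (k - 1) / k))

# Invented example: 4 reviewers screening 5 documents (include/exclude).
decisions = [
    ["in", "in", "in", "in"],
    ["in", "in", "in", "out"],
    ["out", "out", "out", "out"],
    ["in", "in", "in", "in"],
    ["out", "out", "out", "out"],
]
print(agreement_index(decisions, k=2))
```

With one dissenting vote on a single item, the bracketed chance-correction term stays close to 1 and the index remains high, which matches the intuition that near-unanimous screening decisions indicate strong consistency.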

3. Results

Based on the manuscript selection described above, 27 studies were included in the qualitative synthesis of the results in this analysis. These documents were systematically collected and synthesized according to key dimensions derived from the study’s objectives. Table 3 presents this information concisely; through this synthesis, a comparative view of the selected studies is offered, allowing for the identification of patterns, trends and gaps in the current scientific literature on the topic.
This first results section provides a description of the studies included, focusing on their structural and methodological characteristics. It addresses aspects such as the stated objectives, countries of origin, the educational level in which they are situated, the technologies employed, the pedagogical functions of the APIs and AI systems used, as well as the main impacts observed on learning and the methodological limitations reported. This breakdown by dimensions allows for the identification of recurring patterns, stated interests and gaps in the current body of knowledge regarding the use of AI-based APIs in education.
Distribution of Studies by Year of Publication
There is evidence of increased scientific output between 2024 and 2025, which together account for approximately 81.4% of the total studies included. This rise in publications in recent years reflects the growing use of AI and APIs in the educational field, in line with the global expansion of these technologies. Although fewer in number, contributions prior to 2025 demonstrate a gradual evolution of interest in this topic, with pioneering studies dating back to 2015. See year-by-year distribution in Figure 2.
Distribution of Studies by Country
The reviewed studies are concentrated in countries with a strong track record in promoting research on AI. The United States, China, Germany and Japan lead in the number of publications, followed by emerging countries such as Mexico and South Korea. However, the data also reveal a clear underrepresentation of regions with lower levels of digital development, particularly Sub-Saharan Africa and parts of Latin America. This highlights the need to promote greater equity in the generation of knowledge and AI-driven educational innovation. See country distribution in Figure 3.

3.1. Study Objectives

The analysis of the 27 selected studies reveals a clear focus on improving teaching and learning processes through the use of APIs and AI systems. Specifically, 88.9% of the studies concentrated on applied objectives aimed at learning personalization, automated feedback generation, predictive analytics, or enhancing student engagement and motivation. In contrast, 11.1% addressed more technical objectives, such as the creation of scalable architectures or the development of interoperable microservice ecosystems, with notable contributions in this area from Hervás et al. [74] and Garefalakis et al. [73]. See distribution by objective scope in Figure 4.

3.2. Educational Level

Regarding educational level, 77.8% of the studies focused on higher education or vocational training, while only 18.5% addressed experiences in secondary education and a single study (3.7%)—that of Takii et al. [65]—was situated in the context of primary education. This distribution reveals a clear concentration of technological innovation at post-compulsory levels, highlighting a concerning gap in the earlier stages of the educational system, where personalization and learning monitoring could have a crucial impact. See distribution by educational level in Figure 5.

3.3. Technologies Used

With regard to the technologies used, nearly half of the studies (44.4%) incorporated generative models such as ChatGPT and other OpenAI APIs, reflecting a clear trend toward the consolidation of generative AI in the educational field. Alongside these tools, 25.9% of the studies developed custom AI models tailored to specific tasks such as emotion recognition, question generation, or engagement analysis. Technologies such as xAPI and eye-tracking were also used, though less frequently (14.8%), primarily in studies integrating learning analytics and real-time data, as seen in the works of Santhosh et al. [75] and Castellanos-Reyes et al. [61]. See distribution by type of technology used in Figure 6.
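Since several of the reviewed studies rely on xAPI to stream behavioral learning data, it may help to show what such data looks like in practice. The sketch below builds a minimal xAPI statement of the kind an LMS would send to a Learning Record Store; the learner, activity URL and score are invented for illustration, while the statement structure (actor, verb, object) follows the Experience API specification.

```python
import json

# A minimal xAPI ("Experience API") statement. Learner identity, the
# activity identifier and the score are hypothetical example values.
statement = {
    "actor": {"name": "Ana Learner", "mbox": "mailto:ana@example.edu"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://lms.example.edu/activities/quiz-7",
        "definition": {"name": {"en-US": "Quiz 7: Fractions"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

def is_minimal_statement(stmt: dict) -> bool:
    """Check the three properties every xAPI statement must carry."""
    return all(key in stmt for key in ("actor", "verb", "object"))

print(is_minimal_statement(statement))  # True
print(json.dumps(statement)[:60])
```

Because statements like this are plain JSON keyed to shared verb vocabularies, heterogeneous platforms can emit them into a common store, which is precisely the interoperability role the reviewed studies attribute to xAPI.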

3.4. Functions of Applied APIs and AI

From a functional perspective, the applications of AI and APIs were concentrated in five main areas. The most represented was the generation of personalized feedback, present in 37% of the studies. This function showed positive effects on the quality of learning processes, as observed in the works of Kinder et al. [31] and Venter et al. [34]. This was followed by content recommendation systems (25.9%), mechanisms for monitoring engagement and attention (22.2%), predictive applications (14.8%) and embedded intelligent tutors (11.1%), such as the xPST system described by Gilbert et al. [30]. See distribution based on the functions of the APIs and AIs used in Figure 7.

3.5. Observed Impact on Learning

Regarding the impact reported in the studies, the most prominent outcomes included increased student engagement (63%), improved content comprehension (55.6%), enhanced academic performance (48.1%) and strengthened intrinsic motivation (33.3%). These improvements were particularly notable when AI models were combined with real-time adaptive systems. Huang et al. [58] demonstrated that an instant messaging system powered by ChatGPT significantly increased student participation and critical thinking, while [71] showed that redrawing Chinese characters through generative AI enhanced semantic understanding among non-native learners. Similarly, Gharbi and Mohtadi [63] reported performance improvements of up to 25% in MOOCs personalized with ChatGPT and Santhosh et al. [75] highlighted that the combination of eye-tracking and AI-generated adaptive interventions boosted engagement, focus and comprehension in digital environments. See distribution of observed impact in Figure 8.

3.6. Methodological Limitations

Despite these positive findings, several common methodological limitations were identified. A total of 40.7% of the studies involved small or unrepresentative samples, limiting the generalizability of the results. Additionally, 33.3% lacked longitudinal evaluation, making it difficult to assess the sustained impact of these technologies over time. Technical barriers, such as dependence on digital infrastructure (29.6%), were also observed, alongside ethical and pedagogical challenges related to the quality of automatically generated content, data privacy and equitable access. Studies such as those by Okonkwo and Ade-Ibijola [76] and Castellanos-Reyes et al. [61] emphasize, among other aspects, the need to complement automation with teacher supervision and digital literacy strategies. See the distribution of methodological limitations in Figure 9.
Based on the results derived from this initial analysis, we now present an in-depth examination of the findings focused on the five objectives that guided this systematic review. This analysis enables a more precise identification of the contributions made by the studies regarding the use of APIs and AI in the management of educational information, their integration into digital ecosystems, the personalization of learning, the generation of automated feedback and data-driven pedagogical decision-making. The results presented from this perspective facilitate an assessment of the extent to which each objective has been addressed, as well as the identification of emerging trends, unresolved challenges and opportunities for future research in the field of AI-assisted digital education.
Use of APIs and AI in the Management of Educational Information
The reviewed studies reveal a significant incorporation of AI-based technologies for the automated, intelligent and scalable management of educational data. In particular, 22 out of the 27 studies directly analyzed how APIs and AI systems enable the organization of information on student progress, the detection of behavioral patterns and the adaptation of learning pathways. Notable examples include the work of Castellanos-Reyes et al. [61], who used GPT to automatically analyze students’ contributions in virtual forums and Farhood et al. [68], who compared ten performance prediction models with highly accurate results. Likewise, studies such as Zhang [2] and Santhosh et al. [75] combined advanced AI models with biometric or behavioral data (such as eye-tracking) to enhance real-time data-driven decision-making.
Integration of APIs into Educational Platforms and Ecosystems
It was observed that at least 12 studies integrated APIs with educational platforms (LMSs, assessment systems, digital libraries, or interactive apps). This integration was addressed from technical perspectives (e.g., Hervás et al. [74], with open modular APIs for assistive technologies), as well as from the standpoint of interoperability and content personalization [62,65]. These studies consistently highlight that APIs facilitate the creation of more connected, adaptive and sustainable environments that can meet learning needs with low maintenance costs and high service reusability.
Personalization of Learning through Generative AI
Fifteen out of the 27 studies included specific applications of learning personalization through generative AI systems, particularly using language models such as ChatGPT. This trend was clearly observed in experiences involving the adaptation of content, activities, or presentation styles to students’ interests and individual characteristics. Examples include the study by Pesovski et al. [70], which allowed learners to choose between instructional styles such as “traditional teacher,” “Batman,” or “Wednesday Addams,” and that of Huang et al. [58], who designed an instant messaging platform with automated contextual responses. Other studies, such as those by Gharbi and Mohtadi [63] and Valverde-Rebaza et al. [72], employed AI to generate adaptive assessments and differentiated learning pathways in large-scale online courses.
Automated and Adaptive Feedback
AI-generated feedback was another recurring contribution among the selected studies. Ten of the studies focused on immediate, personalized and pedagogically effective feedback provided through generative models. Notable examples include the study by Kinder et al. [31], which compared feedback generated by ChatGPT to that provided by human experts for preservice teachers, and the study by Venter et al. [34], which implemented an integrated application that achieved high levels of adherence to the principles of effective feedback. In both cases, students perceived the feedback as useful and spent more time engaging with it, reinforcing its educational value.
Performance Prediction and Data-Driven Pedagogical Decision-Making
Another relevant contribution relates to the capacity of APIs and AI to predict student performance and guide data-driven pedagogical decision-making. Four of the reviewed studies developed predictive models that yielded high levels of accuracy, such as those by Zhang [2] and Farhood et al. [68], which used LSTM and machine learning models to classify students by performance level. Also noteworthy are the study by Sajja et al. [67], focused on vocational training settings, where GPT-4.0 was used to personalize exam preparation, and the work in [1], which applied sequential mining techniques to detect effective behaviors in a web-based inquiry science environment.
Overall, these results reveal the presence of five key dimensions across the reviewed studies. All of the studies (100%) examined the use of AI for educational information management, as well as mechanisms for learning personalization, confirming the centrality of these two functions in current research and proposals related to AI. Additionally, 74.1% of the studies analyzed the integration or development of APIs as a strategy to enhance interoperability between platforms and educational services. AI-generated feedback appeared in 37% of the cases, while only 14.8% addressed performance prediction as a basis for pedagogical decision-making. These figures reinforce the most established lines of analysis and, at the same time, highlight the areas that require further exploration in future research. See Figure 10 for the distribution of studies across key dimensions.

4. Discussion

Based on the results presented so far, the discussion is developed in relation to the five objectives that guided this study.

4.1. AI-Driven Educational APIs for Intelligent Information Management

The findings reveal the consolidation of AI-based APIs as key tools for the collection, organization, analysis and visualization of educational data. A total of 81.5% of the studies analyzed present AI applications aimed at managing real-time data on student behavior and performance. This reflects a growing trend to position AI as a driver of pedagogical transformation, fostering adaptive approaches and educational decisions grounded in evidence derived from data. Research by Zhang [2], Castellanos-Reyes et al. [61] and Farhood et al. [68] demonstrates how deep learning models can construct more precise and dynamic learner profiles by combining data mining techniques with AI-assisted coding. This intelligent management not only automates previously manual processes but also enables new forms of evidence-based personalization and pedagogical intervention.
Moreover, the predominance of studies with applied aims (88.9%) indicates a strong focus on improving learning outcomes and fostering interaction. By contrast, only 11.1% of the studies addressed more technical aspects, such as the design of architectures or the development of interoperable APIs. This suggests that current proposals are largely built upon existing digital infrastructures, highlighting the need to develop more design-focused approaches to address the specific technical challenges present in educational contexts.

4.2. Effective Integration Across Educational Platforms and Systems

One of the most notable findings of the review is the growing development and implementation of interoperable APIs integrated into digital educational environments. Seventy-four percent of the studies report experiences in which APIs are connected to LMSs, assessment systems, virtual laboratories, or reading platforms, optimizing their performance. Studies such as those by Hervás et al. [74] and Garefalakis et al. [73], which feature microservice architectures and the use of xAPI, demonstrate that technical interoperability is both feasible and desirable, enabling data sharing across systems, as well as adaptation to individual student pathways and needs. This integration supports the creation of more connected, scalable and adaptive educational ecosystems.
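The kind of cross-system record that xAPI-based integration exchanges can be sketched concretely. The field names below (actor, verb, object) follow the Experience API specification, and the "completed" verb URI is a standard ADL identifier; the learner address, activity URL and helper function are made up for illustration.

```python
# Sketch of a minimal Experience API (xAPI) statement, the interoperability
# record format used in studies such as Garefalakis et al. [73]. Field names
# follow the xAPI spec; the concrete learner and activity values are invented.

import json

def make_statement(learner_email: str, verb_id: str, verb_name: str,
                   activity_id: str) -> dict:
    """Build a minimal valid xAPI statement (actor, verb, object)."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{learner_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {"objectType": "Activity", "id": activity_id},
    }

stmt = make_statement(
    "student@example.org",
    "http://adlnet.gov/expapi/verbs/completed",  # standard ADL verb URI
    "completed",
    "https://example.org/course/lab-module-3",   # hypothetical activity
)
payload = json.dumps(stmt)  # would be POSTed to an LRS /statements endpoint
```

Because every connected platform emits and consumes the same statement structure, a learning record store (LRS) can aggregate activity from an LMS, a virtual laboratory, or a reading platform into a single learner trace.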
However, this type of integration is predominantly concentrated in higher education institutions within technologically advanced countries. The limited representation of experiences in compulsory education (3.7%) and in contexts with low digital infrastructure underscores a structural bias in the implementation of these technologies. Expanding such integration into more vulnerable educational systems is essential to ensure fairer and more equitable adoption.

4.3. Pedagogical Benefits of Automated Data Use

The pedagogical benefits observed in the reviewed studies are both significant and diverse. The automated use of data through AI enables learning personalization, real-time feedback, attention tracking and performance prediction. From content recommendation systems [65] to adaptive feedback mechanisms [31,34], generative models have proven effective in enhancing student engagement (63%), content comprehension (55.6%) and academic performance (48.1%). The automation of feedback, in particular, allows for richer and more continuous interaction, especially in virtual environments or contexts with high teaching loads. Moreover, the predictive capacity of some systems [2,67] suggests an increasing use of AI as a pedagogical anticipation tool, enabling early detection of risks such as academic failure or demotivation. However, these benefits must be contextualized and supported by teacher guidance and solid pedagogical planning.
Although most of the studies included in this review highlight the benefits of integrating AI and APIs in education—particularly in relation to personalization, decision-making, and efficiency—some also report limitations or less favorable outcomes. In this regard, studies such as [2] emphasized that the use of AI-based learning analytics tools did not yield significant improvements in academic performance compared to conventional teaching. Other studies, such as [71], pointed to the challenges associated with adopting AI-based APIs, including limited institutional support, lack of technical training, and low teacher engagement. Moreover, some studies underscored user skepticism regarding data privacy and the opacity of AI systems [29], which hindered the consistent implementation of these technologies. These findings underscore the importance of weighing the potential benefits of AI and APIs in educational practice against the contextual, technical, and ethical limitations that may influence their impact.

4.4. Ethical, Technical and Regulatory Challenges in the Management of Educational Data

The reviewed studies reveal significant technical and ethical challenges that remain major barriers. One-third of the studies report limitations in duration, sample size, or replicability, which affects the consistency of their conclusions. On a technical level, the dependence on advanced infrastructures and the complexity of certain solutions—such as the use of fine-tuned LLMs or eye-tracking devices—limits their broad applicability. From an ethical standpoint, concerns arise regarding student privacy, authorship of AI-generated content and equitable access. Authors such as Castellanos-Reyes et al. [61] and Okonkwo and Ade-Ibijola [76] emphasize that these tools require continuous human oversight, critical digital literacy and robust regulatory frameworks to prevent bias, misinterpretation, or technological exclusion.

4.5. A Conceptual Framework for Informed, Connected and Adaptive Educational Ecosystems

We propose a conceptual framework that integrates the contributions of the reviewed technologies, structured around five recurring functional dimensions:
  • Intelligent management of educational information
  • API integration and platform interoperability
  • Personalization through generative AI
  • Automated adaptive feedback
  • Performance prediction and support for decision-making
These dimensions do not operate in isolation but are interrelated in configuring educational ecosystems that are more informed (data-driven decision-making), connected (interoperable and scalable) and adaptive (customizable and responsive in real time). The model acknowledges that educational AI’s potential lies not only in its technological capabilities but also in its ethical, pedagogical and structural integration, where effective API integration and platform interoperability play key roles within robust and equitable institutional frameworks.

5. Conclusions

This study highlights the growing relevance of AI-powered APIs in the educational field, particularly due to their capacity to enhance data-driven practices and foster more adaptive, efficient and personalized learning environments. In line with the objectives of this study, the main findings are as follows:
AI-based APIs are becoming essential tools for the intelligent management of educational information, enabling the real-time collection, analysis and visualization of data to support pedagogical decision-making.
Despite advances in the integration of APIs into educational platforms, especially in higher education, a significant gap remains regarding their implementation in compulsory education and in contexts with limited digital infrastructure.
Among the reported pedagogical benefits are personalized learning, automated feedback, student engagement tracking and performance prediction. However, these advantages are more effective when accompanied by teacher mediation processes and adequate instructional planning.
Ethical, technical, and regulatory challenges persist, including issues related to student data privacy, the opacity of AI systems and unequal access to technological resources.
While our study reveals a general trend toward positive outcomes, some studies report neutral or limited effects, underscoring the influence of contextual, technical and institutional factors that may diminish the impact of these technologies.
It is important to acknowledge that the current body of research in this field presents notable limitations. Many studies are based on small or non-representative samples, lack longitudinal data or are confined to experimental settings. Furthermore, the concentration of research in technologically advanced countries restricts the generalizability of the findings.
Priorities for future research include:
  • Conducting large-scale longitudinal studies to evaluate the sustained impact of AI and APIs in diverse educational contexts.
  • Designing inclusive frameworks for API integration that take into account infrastructure disparities and the need for pedagogical adaptation.
  • Developing teacher training programs to strengthen digital competence and ethical awareness in the use of AI-based tools.
  • Establishing clear guidelines to ensure transparency, data protection, and appropriate data use.
In conclusion, we believe that AI-driven APIs have the potential to effectively transform teaching practices in educational institutions. However, realizing their benefits on a global scale will require coordinated technical, ethical, and pedagogical efforts aimed at fostering a fairer and more equitable model of education.

Author Contributions

Conceptualization and methodology, D.P.-J. and M.C.G.-A.; software, A.G.S.-Á., Z.P.-C. and C.d.l.Á.P.-L.; validation, D.P.-J., M.C.G.-A. and C.d.l.Á.P.-L.; formal analysis, D.P.-J., M.C.G.-A., A.G.S.-Á., Z.P.-C. and C.d.l.Á.P.-L.; investigation, D.P.-J., M.C.G.-A. and C.d.l.Á.P.-L.; resources, M.C.G.-A., A.G.S.-Á. and Z.P.-C.; data curation, D.P.-J., M.C.G.-A. and C.d.l.Á.P.-L.; writing—original draft preparation, D.P.-J., M.C.G.-A., A.G.S.-Á. and C.d.l.Á.P.-L.; writing—review and editing, D.P.-J., M.C.G.-A., A.G.S.-Á., Z.P.-C. and C.d.l.Á.P.-L.; visualization, Z.P.-C. and C.d.l.Á.P.-L.; supervision, D.P.-J. and M.C.G.-A.; project administration, D.P.-J., A.G.S.-Á., Z.P.-C. and C.d.l.Á.P.-L.; funding acquisition, D.P.-J., M.C.G.-A., A.G.S.-Á., Z.P.-C. and C.d.l.Á.P.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

During the preparation of this study, the authors used Perplexity AI for semantic and content validation purposes. This manuscript was prepared by members of the research group at the University of La Laguna (DISAE).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, C.M.; Wang, W.F. Mining Effective Learning Behaviors in a Web-Based Inquiry Science Environment. J. Sci. Educ. Technol. 2020, 29, 519–535. [Google Scholar] [CrossRef]
  2. Zhang, M. Integrating Deep Learning into Educational Big Data Analytics for Enhanced Intelligent Learning Platforms. Inf. Technol. Control 2024, 53, 1060–1073. [Google Scholar] [CrossRef]
  3. Yilmaz, R.; Yurdugül, H.; Yilmaz, F.G.K.; Şahin, M.; Sulak, S.; Aydin, F.; Tepgeç, M.; Müftüoğlu, C.T.; Oral, Ö. Smart MOOC integrated with intelligent tutoring: A system architecture and framework model proposal. Comput. Educ. Artif. Intell. 2022, 3, 100092. [Google Scholar] [CrossRef]
  4. Gubareva, R.; Lopes, R.P. Virtual Assistants for Learning: A Systematic Literature Review. In Proceedings of the 12th International Conference on Computer Supported Education; Chad Lane, H., Zvacek, S., Uhomoibhi, J., Eds.; SciTePress: Setubal, Portugal, 2020; pp. 97–103. [Google Scholar] [CrossRef]
  5. Chng, E.; Tan, A.L.; Tan, S.C. Examining the Use of Emerging Technologies in Schools: A Review of Artificial Intelligence and Immersive Technologies in STEM Education. J. STEM Educ. Res. 2023, 6, 385–407. [Google Scholar] [CrossRef]
  6. Vázquez-Ingelmo, A.; García-Peñalvo, F.J.; Therón, R. Towards a Technological Ecosystem to Provide Information Dashboards as a Service: A Dynamic Proposal for Supplying Dashboards Adapted to Specific Scenarios. Appl. Sci. 2021, 11, 3249. [Google Scholar] [CrossRef]
  7. Prahani, B.K.; Rizki, I.A.; Jatmiko, B.; Suprapto, N.; Amelia, T. Artificial Intelligence in Education Research During the Last Ten Years: A Review and Bibliometric Study. Int. J. Emerg. Technol. Learn. 2022, 17, 169–188. [Google Scholar] [CrossRef]
  8. Andrey Bernate, J.; Puerto Garavito, S.C. Impacto de la Educación Física en las Competencias Ciudadanas: Una Revisión Bibliométrica. Cienc. Y Deporte 2023, 8, 507–522. Available online: http://scielo.sld.cu/scielo.php?script=sci_abstract&pid=S2223-17732023000300507&lng=en&nrm=iso&tlng=es (accessed on 12 May 2025).
  9. Moreno-Fernández, O.; Gómez-Camacho, A. Impact of the Covid-19 pandemic on teacher tweeting in Spain: Needs, interests, and emotional implications. Educ. XX1 2023, 26, 2. [Google Scholar] [CrossRef]
  10. Pérez Imaicela, R.H. Plataforma Educativa Asistida por el Modelo de Inteligencia Artificial GPT Para el Refuerzo Académico de Estudiantes del Módulo Tecnologías; Desarrollo Web en la Carrera de TI de la FISEI-UTA; Universidad Técnica de Ambato: Ambato, Ecuador, 2024. [Google Scholar]
  11. Alam, A. Possibilities and Apprehensions in the Landscape of Artificial Intelligence in Education. In Proceedings of the 2021 International Conference on Computational Intelligence and Computing Applications (ICCICA), Nagpur, India, 26–27 November 2021; pp. 1–8. Available online: https://ieeexplore.ieee.org/document/9697272 (accessed on 12 May 2025).
  12. Beaulac, C.; Rosenthal, J.S. Predicting university students’ academic success and major using random forests. Res. High. Educ. 2019, 60, 1048–1064. [Google Scholar] [CrossRef]
  13. Lloret, C.M.; González, A.H.; Raboso, D.D. Sistemas y recursos educativos basados en IA que apoyan y evalúan la educación. IA EñTM 2022. [Google Scholar] [CrossRef]
  14. Laupichler, M.C.; Aster, A.; Schirch, J.; Raupach, T. Artificial intelligence literacy in higher and adult education: A scoping literature review. Comput. Educ. Artif. Intell. 2022, 3, 100101. [Google Scholar] [CrossRef]
  15. Srinivasan, V. AI & learning: A preferred future. Comput. Educ. Artif. Intell. 2022, 3, 100062. [Google Scholar] [CrossRef]
  16. Wolters, A.; Arz Von Straussenburg, A.F.; Riehle, D.M. AI Literacy in Adult Education-A Literature Review. In Proceedings of the 57th Hawaii International Conference on System Sciences, Beach Resort, HI, USA, 3–6 January 2024. [Google Scholar] [CrossRef]
  17. Zhai, X. ChatGPT User Experience: Implications for Education. SSRN 2022, 1–10. [Google Scholar] [CrossRef]
  18. Tuomi, I.; Cachia, R.; Villar Onrubia, D. On the Futures of Technology in Education: Emerging Trends and Policy Implications; JRC Science for Policy Report; Publications Office of the European Union: Luxembourg, 2023; Available online: https://publications.jrc.ec.europa.eu/repository/handle/JRC134308 (accessed on 27 May 2025).
  19. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are few-Shot Learners. arXiv 2020. [Google Scholar] [CrossRef]
  20. Bozkurt, A.; Xiao, J.; Lambert, S.; Pazurek, A.; Crompton, H.; Koseoglu, S.; Farrow, R.; Bond, M.; Nerantzi, C.; Honeychurch, S.; et al. Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian J. Distance Educ. 2023, 18, 53–130. [Google Scholar]
  21. Ng, D.T.K.; Lee, M.; Tan, R.J.Y.; Hu, X.; Downie, J.S.; Chu, S.K.W. A review of AI teaching and learning from 2000 to 2020. Educ. Inf. Technol. 2022, 28, 8445–8501. [Google Scholar] [CrossRef]
  22. García-Peñalvo, F.J.; Llorens-Largo, F.; Vidal, J. La nueva realidad de la educación ante los avances de la inteligencia artificial generativa. RIED-Rev. Iberoam. Educ. Distancia 2024, 27, 9–39. [Google Scholar] [CrossRef]
  23. Berggren, A.; Söderström, T. Virtual assistants in higher education—From research to practice. Educ. Inf. Technol. 2021, 26, 2865–2883. [Google Scholar]
  24. Murtaza, M.; Ahmed, Y.; Shamsi, J.A.; Sherwani, F.; Usman, M. AI-Based Personalized E-Learning Systems: Issues, Challenges, and Solutions. IEEE Access 2022, 10, 81323–81342. Available online: https://ieeexplore.ieee.org/document/9840390 (accessed on 12 May 2025). [CrossRef]
  25. Adako, O.P.; Adeusi, O.C.; Alaba, P.A. Enhancing education for children with ASD: A review of evaluation and measurement in AI tool implementation. Disabil. Rehabil. Assist. Technol. 2025, 1–22. [Google Scholar] [CrossRef]
  26. Almeida, C.; Kalinowski, M.; Feijó, B. A systematic mapping of negative effects of gamification in education/learning systems. In Proceedings of the 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Palermo, Italy, 1–3 September 2021. [Google Scholar] [CrossRef]
  27. Encalada Jumbo, F.C.; Cedeño Granda, S.A.; Córdova Macas, L.R.; Granda Guerrero, B.C. Aplicaciones educativas para mejorar el aprendizaje en el aula. Pedagogical Constellations 2025, 4, 92–109. [Google Scholar] [CrossRef]
  28. Castrillón, O.; Sarache, W.; Ruíz, S. Predicción del rendimiento académico por medio de técnicas de inteligencia artificial. Form. Univ. 2020, 13, 93–102. [Google Scholar] [CrossRef]
  29. Gašević, D.; Dawson, S.; Siemens, G. Let’s not forget: Learning analytics are about learning. TechTrends 2015, 59, 64–71. [Google Scholar] [CrossRef]
  30. Gilbert, S.B.; Blessing, S.B.; Guo, E. Authoring effective embedded tutors: An overview of the extensible problem specific tutor (xPST) system. Int. J. Artif. Intell. Educ. 2015, 25, 428–454. [Google Scholar] [CrossRef]
  31. Kinder, A.; Briese, F.J.; Jacobs, M.; Dern, N.; Glodny, N.; Jacobs, S.; Leßmann, S. Effects of adaptive feedback generated by a large language model: A case study in teacher education. Comput. Educ. Artif. Intell. 2025, 8, 100349. [Google Scholar] [CrossRef]
  32. Perrotta, C.; Gulson, K.N.; Williamson, B.; Witzenberger, K. Automation, APIs and the distributed labour of platform pedagogies in Google Classroom. Crit. Stud. Educ. 2021, 62, 97–113. [Google Scholar] [CrossRef]
  33. Sánchez-Vera, M.D.M. La inteligencia artificial como recurso docente: Usos y posibilidades para el profesorado. Educar 2023, 60, 33–47. [Google Scholar] [CrossRef]
  34. Venter, J.; Coetzee, S.A.; Schmulian, A. Exploring the use of artificial intelligence (AI) in the delivery of effective feedback. Assess. Eval. High. Educ. 2024, 50, 516–536. [Google Scholar] [CrossRef]
  35. Williamson, B.; Eynon, R. Historical threads, missing links, and future directions in AI in education. Learn. Media Technol. 2020, 45, 223–235. [Google Scholar] [CrossRef]
  36. Baidoo-Anu, D.; Owusu Ansah, L. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
  37. Hartong, S.; Förschler, A. Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data Soc. 2019, 6, 2053951719853311. [Google Scholar] [CrossRef]
  38. Shibani, A.; Knight, S.; Buckingham Shum, S. Educator perspectives on learning analytics in classroom practice. Internet High. Educ. 2020, 46, 100730. [Google Scholar] [CrossRef]
  39. Tlili, A.; Shehata, B.; Adarkwah, M.A.; Bozkurt, A.; Hickey, D.T.; Huang, R.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
  40. García-López, I.M.; González, C.S.G.; Ramírez-Montoya, M.S.; Molina-Espinosa, J.M. Challenges of implementing ChatGPT on education: Systematic literature review. Int. J. Educ. Res. Open 2025, 8, 100401. [Google Scholar] [CrossRef]
  41. Gónzález-González, C. El impacto de la inteligencia artificial en la educación: Transformación de la forma de enseñar y de aprender. Rev. Qurriculum 2023, 2, 51–60. [Google Scholar] [CrossRef]
  42. Holmes, W.; Porayska-Pomsta, K. The Ethics of Artificial Intelligence in Education; Routledge: London, UK, 2023. [Google Scholar]
  43. Liebrenz, M.; Schleifer, R.; Buadze, A.; Bhugra, D.; Smith, A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit. Health 2023, 5, e105–e106. [Google Scholar] [CrossRef]
  44. Xu, W.; Ouyang, F. The application of AI technologies in STEM education: A systematic review from 2011 to 2021. Int. J. STEM Educ. 2022, 9, 59. [Google Scholar] [CrossRef]
  45. Figaredo, D.D.; Reich, J.; Ruipérez-Valiente, J.A. Analítica del aprendizaje y educación basada en datos: Un campo en expansión. RIED-Rev. Iberoam. Educ. Distancia 2020, 23, 33–43. [Google Scholar] [CrossRef]
  46. Ifenthaler, D.; Schumacher, C. Reciprocal issues of artificial and human intelligence in education. J. Res. Technol. Educ. 2023, 55, 1–6. [Google Scholar] [CrossRef]
  47. Ifenthaler, D.; Majumdar, R.; Gorissen, P.; Judge, M.; Mishra, S.; Raffaghelli, J.; Shimada, A. Artificial intelligence in education: Implications for policymakers, researchers, and practitioners. Technol. Knowl. Learn. 2024, 29, 1693–1710. [Google Scholar] [CrossRef]
  48. Lee, U.; Jeong, Y.; Koh, J.; Byun, G.; Lee, Y.; Lee, H.; Kim, H. I see you: Teacher analytics with GPT-4 vision-powered observational assessment. Smart Learn. Environ. 2024, 11, 48. [Google Scholar] [CrossRef]
  49. Santamaría-Bonfil, G.; Escobedo-Briones, G.; Pérez-Ramírez, M.; Arroyo-Figueroa, G. A learning ecosystem for linemen training based on big data components and learning analytics. J. Univers. Comput. Sci. 2019, 25, 541–568. [Google Scholar] [CrossRef]
  50. Zambrano, P.L.; Bazurto, L.M.; Bazurto, G.M.; Llerena, T.R. El desarrollo de interfaces de programación de aplicaciones (APIs) dinamiza el acceso a contenidos en plataformas de educación virtual: The development of application programming interfaces (APIs) streamlines access to content on virtual education platforms. LATAM Rev. Latinoam. Cienc. Soc. Humanidades 2025, 6, 3039–3047. [Google Scholar] [CrossRef]
  51. Castillo, M.E. Impacto de la inteligencia artificial en el proceso de enseñanza aprendizaje en la educación secundaria. LATAM Rev. Latinoam. Cienc. Soc. Humanidades 2023, 4, 515–530. [Google Scholar] [CrossRef]
  52. Uzcátegui Pacheco, R.A.; Ríos Colmenárez, M.J. Inteligencia Artificial para la Educación: Formar en tiempos de incertidumbre para adelantar el futuro. Areté Rev. Digit. Dr. Educ. 2024, 10, 1–21. [Google Scholar] [CrossRef]
  53. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.B.; Santos, O.C.; Rodrigo, M.T.; Cukurova, M.; Bittencourt, I.I.; et al. Ethics of AI in Education: Towards a Community-Wide Framework. Int. J. Artif. Intell. Educ. 2022, 32, 504–526. [Google Scholar] [CrossRef]
  54. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Alonso-Fernández, S. Declaración PRISMA 2020: Una guía actualizada para la publicación de revisiones sistemáticas. Rev. Española Cardiol. 2021, 74, 790–799. [Google Scholar] [CrossRef]
  55. Higgins, J.P.T.; Green, S. (Eds.) Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0; The Cochrane Collaboration: London, UK, 2011; Available online: https://handbook-5-1.cochrane.org (accessed on 12 May 2025).
  56. Perreault, W.D., Jr.; Leigh, L.E. Reliability of nominal data based on qualitative judgments. J. Mark. Res. 1989, 26, 135–148. [Google Scholar] [CrossRef]
  57. Peyton, K.; Unnikrishnan, S.; Mulligan, B. A review of university chatbots for student support: FAQs and beyond. Discov. Educ. 2025, 4, 21. [Google Scholar] [CrossRef]
  58. Huang, Y.M.; Chen, P.H.; Lee, H.Y.; Sandnes, F.E.; Wu, T.T. ChatGPT-Enhanced Mobile Instant Messaging in Online Learning: Effects on Student Outcomes and Perceptions. Comput. Hum. Behav. 2025, 168, 108659. [Google Scholar] [CrossRef]
  59. Yu, S.; Androsov, A.; Yan, H. Exploring the prospects of multimodal large language models for Automated Emotion Recognition in education: Insights from Gemini. Comput. Educ. 2025, 232, 105307. [Google Scholar] [CrossRef]
  60. Caccavale, F.; Gargalo, C.L.; Kager, J.; Larsen, S.; Gernaey, K.V.; Krühne, U. ChatGMP: A case of AI chatbots in chemical engineering training for the automation of repetitive tasks. Comput. Educ. Artif. Intell. 2025, 8, 100354. [Google Scholar] [CrossRef]
  61. Castellanos-Reyes, D.; Olesova, L.; Sadaf, A. Transforming online learning research: Leveraging GPT large language models for automated content analysis of cognitive presence. Internet High. Educ. 2025, 65, 101001. [Google Scholar] [CrossRef]
  62. Bernasconi, E.; Redavid, D.; Ferilli, S. Enhancing Personalised Learning with a Context-Aware Intelligent Question-Answering System and Automated Frequently Asked Question Generation. Electronics 2025, 14, 1481. [Google Scholar] [CrossRef]
  63. Gharbi, M.; Mohtadi, M.T. Personalizing mooc assessments with ai: Fine-tuning chatgpt for scalable learning. Int. J. Tech. Phys. Probl. Eng. 2025, 17, 192–203. [Google Scholar]
  64. Jusoh, S.; Kadir, R.A. Chatbot in education: Trends, personalisation, and techniques. Multimed. Tools Appl. 2025, 1–24. [Google Scholar] [CrossRef]
  65. Takii, K.; Flanagan, B.; Li, H.; Yang, Y.; Koike, K.; Ogata, H. Explainable eBook recommendation for extensive reading in K-12 EFL learning. Res. Pract. Technol. Enhanc. Learn. 2024, 20, 027. [Google Scholar] [CrossRef]
  66. Bagci, M.; Mehler, A.; Abrami, G.; Schrottenbacher, P.; Spiekermann, C.; Konca, M.; Engel, J. Simulation-Based Learning in Virtual Reality: Three Use Cases from Social Science and Technological Foundations in Terms of Va. Si. Li-Lab. Technol. Knowl. Learn. 2025, 1–40. [Google Scholar] [CrossRef]
  67. Sajja, R.; Sermet, Y.; Demir, I.; Pursnani, V. AI-Assisted Educational Framework for Floodplain Manager Certification: Enhancing Vocational Education and Training Through Personalized Learning. IEEE Access 2025, 13, 42401–42413. [Google Scholar] [CrossRef]
  68. Farhood, H.; Joudah, I.; Beheshti, A.; Muller, S. Evaluating and enhancing artificial intelligence models for predicting student learning outcomes. Informatics 2024, 11, 46. [Google Scholar] [CrossRef]
  69. Naatonis, R.N.; Rusijono, R.; Jannah, M.; Malahina, E.A.U. Evaluation of Problem Based Gamification Learning (PBGL) Model on Critical Thinking Ability with Artificial Intelligence Approach Integrated with ChatGPT API: An Experimental Study. Qubahan Acad. J. 2024, 4, 485–520. [Google Scholar] [CrossRef]
  70. Pesovski, I.; Santos, R.; Henriques, R.; Trajkovik, V. Generative AI for Customizable Learning Experiences. Sustainability 2024, 16, 3034. [Google Scholar] [CrossRef]
  71. Wang, L.; Lou, Y.; Li, X.; Xiang, Y.; Jiang, T.; Che, Y.; Ye, C. GlyphGenius: Unleashing the potential of AIGC in Chinese character learning. IEEE Access 2024, 12, 136420–136434. [Google Scholar] [CrossRef]
  72. Valverde-Rebaza, J.; González, A.; Navarro-Hinojosa, O.; Noguez, J. Advanced large language models and visualization tools for data analytics learning. Front. Educ. 2024, 9, 1418006. [Google Scholar] [CrossRef]
  73. Garefalakis, M.; Kamarianakis, Z.; Panagiotakis, S. Towards a supervised remote laboratory platform for teaching microcontroller programming. Information 2024, 15, 209. [Google Scholar] [CrossRef]
  74. Hervás, R.; Francisco, V.; Concepción, E.; Sevilla, A.F.; Méndez, G. Creating an API Ecosystem for Assistive Technologies Oriented to Cognitive Disabilities. IEEE Access 2024, 12, 163224–163240. [Google Scholar] [CrossRef]
  75. Santhosh, J.; Dengel, A.; Ishimaru, S. Gaze-Driven Adaptive Learning System with ChatGPT-Generated Summaries. IEEE Access 2024, 12, 173714–173733. [Google Scholar] [CrossRef]
  76. Okonkwo, C.W.; Ade-Ibijola, A. Python-bot: A chatbot for teaching python programming. Eng. Lett. 2020, 29, 25. [Google Scholar]
Figure 1. Flow diagram of the phases of the review.
Figure 2. Distribution of studies by year.
Figure 3. Distribution of studies by country.
Figure 4. Distribution of studies by objective scope.
Figure 5. Distribution of studies by educational level.
Figure 6. Distribution of studies by type of technology used.
Figure 7. Distribution of studies by functions of applied APIs and AI.
Figure 8. Distribution of studies by observed impact.
Figure 9. Distribution of studies by methodological limitations.
Figure 10. Distribution of studies by key dimensions.
Table 1. Inclusion and exclusion criteria.
Inclusion criteria:
  a. Documents published between 2013 and 2025
  b. Documents in Spanish or English
  c. Peer-reviewed studies
  d. Studies addressing the use of APIs with AI components in educational or training contexts
  e. Studies focused on educational information management, learning personalization, or learning analytics
Exclusion criteria:
  a. Documents published before 2013
  b. Documents in languages other than Spanish or English
  c. Publications not peer-reviewed or lacking academic validation
  d. Studies that do not address the combination of APIs and AI or are not related to educational contexts
Table 2. Truncated terms and search equation used for each database.
Database | Truncated Terms/Search Equation
Scopus | TITLE-ABS-KEY((“artificial intelligence” OR “inteligencia artificial”) AND (“application programming interface” OR “API”) AND (“educational technology” OR “learning environment” OR “digital education” OR “plataformas educativas”) AND (“information management” OR “learning analytics” OR “student data” OR “educational data”) AND (“personalization” OR “integration” OR “interoperability” OR “decision making”))
WoS | TS = (“artificial intelligence” OR “AI”) AND TS = (“API” OR “application programming interface”) AND TS = (“education” OR “educational technology” OR “learning analytics”)
IEEE Xplore | “artificial intelligence” AND API AND education
Dialnet | “inteligencia artificial” “educación” “plataforma digital” and “inteligencia artificial” “educación” “plataforma educativa”
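The Scopus search equation in Table 2 can also be assembled and reused programmatically, which makes the OR-groups of synonyms easier to audit and adapt for replication. The following sketch builds the query string from the term groups above; the Elsevier endpoint and API-key header shown in the closing comment are assumptions based on Elsevier's public Scopus Search API and were not part of the original search protocol.

```python
# Assemble the boolean search equation from Table 2 out of its OR-groups,
# so the same term groups can be reviewed, versioned, or adapted per database.

TERM_GROUPS = [
    ['"artificial intelligence"', '"inteligencia artificial"'],
    ['"application programming interface"', '"API"'],
    ['"educational technology"', '"learning environment"',
     '"digital education"', '"plataformas educativas"'],
    ['"information management"', '"learning analytics"',
     '"student data"', '"educational data"'],
    ['"personalization"', '"integration"',
     '"interoperability"', '"decision making"'],
]

def build_scopus_query(groups):
    """Join each OR-group of synonyms, combine groups with AND,
    and wrap the result in Scopus's TITLE-ABS-KEY() field restriction."""
    ands = " AND ".join("(" + " OR ".join(g) + ")" for g in groups)
    return f"TITLE-ABS-KEY({ands})"

query = build_scopus_query(TERM_GROUPS)
print(query)

# Running the search would mean sending this string to the Scopus Search API,
# e.g. (requires an Elsevier API key -- hypothetical usage, not executed here):
#   import requests
#   r = requests.get("https://api.elsevier.com/content/search/scopus",
#                    params={"query": query},
#                    headers={"X-ELS-APIKey": "YOUR_KEY"})
```

Keeping the equation as data rather than a hand-typed string also makes it straightforward to derive the simpler WoS and IEEE Xplore variants from the same term groups.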
Table 3. Analyzed documents.
Reference | Objectives | Sample | Country | Methodology | Results
Peyton et al. [57] | To review the use of chatbots in universities for student support beyond the classroom, evaluating implemented models, NLP technologies and platforms. | 8 university chatbot implementations | International (cases from universities in the US, UK, Australia and Asia; comparison with non-educational sectors) | Narrative review with targeted search on Google Scholar. Descriptive analysis of chatbot architectures (retrieval, AIML, GPT, etc.) and tools such as QnA Maker and Dialogflow. | Predominant use of FAQ-based models. Limited adoption of generative AI due to lack of educational datasets and low formal evaluation. Opportunities for improvement identified through RAG, LangChain and APIs such as OpenAI to enrich response systems and personalization.
Huang et al. [58] | To evaluate the impact of an AI messaging tool (ChatMIM) on student engagement, the development of higher-order thinking skills and student perceptions in online learning environments. | 63 postgraduate students (33 in the experimental group, 30 in the control group) enrolled in an advanced digital learning course | Taiwan | Experimental study with a pretest/posttest design and a mixed-methods approach. Combined questionnaires, log analysis, interviews and the use of ChatGPT (GPT-3.5) as a support tool. | Significant improvements in engagement (behavioral, cognitive and emotional), critical thinking, problem-solving and creativity. Students positively valued the system’s usefulness and its educational potential due to reduced cognitive load and automated feedback.
Yu et al. [59] | To evaluate the performance of the Gemini model (Multimodal LLM) in automatic emotion recognition (AER) tasks based on images in educational contexts, comparing accuracy, error patterns and emotional inference mechanisms. | 2627 images extracted from five datasets: CK+, FER-2013, RAF-DB (basic emotions) and OL-SFED, DAiSEE (academic emotions) | China | Quantitative analysis of model performance using performance metrics (F1-score, recall). Coding of emotional inference patterns. | Gemini was effective in recognizing basic emotions (e.g., happiness, surprise), but less accurate with academic emotions (e.g., confusion, distraction). Preprocessing improved recognition quality. The model showed a reliance on facial features with insufficient contextual understanding.
Caccavale et al. [60] | To develop and implement ChatGMP, an LLM-based chatbot designed to conduct simulated audits in a postgraduate course, comparing its effectiveness to that of human instructors and evaluating its impact on the learning experience. | 21 master’s student groups (~120 students); 3 groups conducted the audit exercise using ChatGMP. | Denmark | Case study with a mixed-methods approach. Technical evaluation of the model (FLAN-T5), comparative testing with instructors, student surveys and analysis of response quality. | Similar experience and perceived quality across groups. ChatGMP automated repetitive tasks successfully and was well received, though it showed limitations in some responses and documents.
Kinder et al. [31] | To compare the impact of adaptive feedback generated by ChatGPT versus static expert feedback on students’ written performance, justification quality and perception of feedback in pre-service teacher education. | 269 master’s students in teacher training, randomly assigned to two groups: adaptive feedback (n = 132) and static feedback (n = 137). | Germany | Randomized controlled trial. Evaluation of written performance before and after receiving feedback. Quantitative analysis (ANCOVA, logistic regression, Wilcoxon) and perception measures. | Adaptive feedback improved justification quality and increased word count, but did not affect decision accuracy. It was perceived as more useful and engaging and students spent more time processing it.
Castellanos-Reyes et al. [61] | To explore the reliability and efficiency of GPT models in automating the analysis of cognitive presence (CP) in online discussions using an adapted LLM-based coding approach. | 293 paragraphs extracted from 180 forum messages in a postgraduate instructional design course. | United States | Comparative study of AI-assisted content coding (LACA) versus human coding, using GPT models with one-shot and few-shot prompts. Evaluation of reliability (Cohen’s k), time and cost. | The fine-tuned model using a one-shot prompt achieved moderate agreement with human coders (k = 0.59). High levels of accuracy were achieved in the integration phase, but low reliability was observed in phases such as resolution. The LACA approach significantly reduced analysis time and cost, though it requires prior knowledge of data processing and prompt engineering.
Bernasconi et al. [62] | To develop and implement an intelligent question-answering system (IQAS) with contextual sensitivity, along with an automatic FAQ generation tool, aimed at enhancing personalized learning. | 2000 questions (1200 factual and 800 inferential) evaluated using performance metrics. Students and experts participated in validation and feedback testing. | Italy | Hybrid design combining rule-based systems with transformer models (BERT, DistilBERT, BART), reasoning through knowledge graphs and automatic FAQ extraction. Quantitative evaluation (accuracy, F1) and qualitative assessment (usability, cognitive load, explainability). | The IQAS achieved 95% accuracy on factual questions and an F1 score of 0.85 on inferential ones. 80% of the generated FAQs were considered relevant by experts. Integration with the LMS proved feasible. User experience was rated positively (78% positive feedback, 4.1/5 in explainability).
Gharbi & Mohtadi [63] | To explore how personalized assessment in MOOCs can be achieved through the fine-tuning of ChatGPT to generate individualized feedback, adaptive quizzes and scalable learning pathways. | Experiments conducted across multiple MOOCs (mathematics, programming, English and data science), with control and experimental groups. Comparative data on performance, feedback and satisfaction. | Morocco | Development and validation of a system based on fine-tuned ChatGPT. Evaluation using quantitative metrics (accuracy, F1 score, response time) and qualitative indicators (engagement, cognitive load, satisfaction). Comparison between AI-supported and non-AI-supported groups. | Groups that received adaptive feedback through ChatGPT improved performance by 12% to 25%. Error rates decreased by up to 50% and student satisfaction exceeded 89%. The system proved effective in terms of scalability, personalization and real-time response, though ethical and technical challenges were noted.
Jusoh & Abdul Kadir [64] | To conduct a systematic review on the current state, personalization approaches and development techniques of chatbots in education, with special emphasis on their potential to enhance teaching and learning. | Review of 720 identified articles, of which 116 met quality criteria for final analysis (published between 2018 and 2024). | Malaysia | Systematic review following PRISMA guidelines. Search conducted in IEEE Xplore and ACM Digital Library. Application of inclusion, exclusion and quality assessment criteria with weighting for novelty, content, analysis and results. | The review identified trends in the use of chatbots as educational support tools, highlighting their capacity for personalization and usefulness in virtual environments. Development techniques such as programming, AIML and no-code platforms were analyzed. Challenges were noted, including limitations in natural language understanding, technological dependency and ethical concerns.
Takii et al. [65] | To develop and evaluate an explainable recommendation system for digital books in extensive reading programs in English as a Foreign Language (EFL) at the secondary level, tailored to students’ difficulty preferences. | 240 Japanese secondary school students (120 first-year and 120 second-year). | Japan | Mixed-methods design: algorithm based on TF-IDF and CEFR-J lexical profiles to estimate material difficulty and student preferences. Technical evaluation of the system, along with usage and perception analysis through log data and a TAM questionnaire (n = 203). | The system accurately estimated the difficulty of simpler texts. Although it did not improve overall performance or motivation, it was well received by already motivated students. A correlation was found between system use and increased reading activity. Improvements to the explanation of recommendations are suggested to enhance persuasiveness.
Bagci et al. [66] | To develop and evaluate the Va.Si.Li-Lab system for simulating social learning scenarios in virtual reality, exploring its ability to predict communicative contexts through multimodal data analysis. | 9 simulations across 6 sub-scenarios (school education, organizational pedagogy and social work), with 3 participants per simulation (total: 27). | Germany | Experimental design based on VR simulations with multimodal data collection (voice, gaze, movement, gestures). Analysis using multimodal interaction graphs, SVM classification and evolutionary feature selection. | Multimodal interaction patterns accurately predicted the type of educational scenario. Optimal F1 scores of 1.0 were achieved in selected configurations. The system demonstrated potential for identifying complex educational contexts in real time and fostering critical reflection in training environments.
Sajja et al. [67] | To develop and evaluate an AI-assisted educational tool for preparing for the Floodplain Manager (FPM) certification exam, offering personalized learning, adaptive quizzes and automated feedback. | Evaluation with 145 open-ended questions and 82 multiple-choice questions drawn from real certification preparation materials. | United States | Development of a platform based on ChatGPT-4o and RAG architecture. Evaluation through answer comparison, cosine similarity analysis and accuracy in closed-ended questions. Validation with experts and presentation at an international conference. | The system achieved 91.03% accuracy on open-ended questions and 95.12% on multiple-choice items. Experts rated it positively for applicability, scalability and personalization, although challenges were noted regarding integration of local data, deeper semantic evaluation and visualization of geospatial data.
Zhang [2] | To apply deep learning techniques to analyze large-scale educational data and predict students’ academic performance, incorporating variables such as engagement, resource usage and parental presence. | 480 student records from the LMS platform Kalboard 360, with 16 attributes related to behavior, performance and family involvement. | China | Predictive model based on LSTM networks. Elastic Net was used for feature selection and the model was trained using regularization techniques, hyperparameter tuning and cross-validation. | The LSTM model achieved 99% accuracy in predicting academic performance (high, medium and low categories), significantly outperforming traditional models (Random Forest, SVM, KNN). The predictive value of variables such as participation in announcements, resource use and parental involvement was confirmed.
Lee et al. [48] | To develop and evaluate the VidAAS system based on GPT-4V for automated classroom observation, aimed at enhancing teachers’ reflective practice through real-time teaching analytics. | 5 primary school teachers with experience in educational use of AI, selected as usability test experts. | South Korea | Exploratory study with six phases: theoretical review, design and implementation of VidAAS, usability testing, qualitative interviews, thematic analysis and SWOT analysis. GPT-4V, Whisper and LangChain were employed to integrate computer vision and text analysis. | VidAAS demonstrated high accuracy in evaluating psychomotor domains, detailed explanations and potential to support both reflection-in-action and reflection-on-action. Identified limitations included latency, scalability and the assessment of affective domains. Opportunities were noted to improve teacher training, personalize feedback and diversify assessment strategies, although ethical challenges related to privacy and evaluative authority were also highlighted.
Farhood et al. [68] | To compare and optimize ten machine learning (ML) and deep learning (DL) models for predicting student academic performance, also evaluating the impact of feature selection using Lasso and hyperparameter tuning. | Two public datasets: 395 Portuguese students (mathematics, secondary education) and 480 students from 14 countries (primary to secondary education, various subjects) | Australia | Comparison of 7 ML models and 3 DL models (Random Forest, XGBoost, SVM, CNN, FFNN, GBNN, etc.). Evaluation using cross-validation and holdout. Application of Lasso regularization and Bayesian hyperparameter optimization. | The most accurate models were Random Forest and XGBoost (ML) and GBNN (DL). Lasso improved performance in several cases (e.g., logistic regression, CNN). Hyperparameter tuning increased model adaptability and accuracy. The study offers practical guidance for selecting predictive models applicable to real educational settings.
Naatonis & Acevedo [69] | To evaluate the impact of a personalized learning model based on the ChatGPT API on the academic performance and motivation of technical high school students in social sciences. | 46 second-year technical high school students, divided into an experimental group (n = 23) and a control group (n = 23). | Argentina | Quasi-experimental design with pretest and posttest. Evaluation of academic performance, motivation and learning perception. The ChatGPT API was integrated to provide personalized feedback during text study and practical activities. | The experimental group showed significant improvements in reading comprehension, problem-solving and intrinsic motivation. Students positively valued the interaction with AI, highlighting immediate feedback, natural language use and personalization. Limitations included response time and the need for teacher supervision to prevent excessive dependency.
Pesovski et al. [70] | To design, implement and evaluate a GPT-4-based system for generating personalized educational materials in three styles (traditional teacher, Batman and Wednesday Addams), automatically integrated into an LMS. | 20 first-year software engineering students (average age: 20). Longitudinal study with surveys administered at the end of the course and six months later. | North Macedonia and Portugal | Exploratory study with integration of the GPT-4 API into the LMS. Automatic generation of content in three styles, tracking of interaction time, two questionnaires (immediate and 6-month follow-up) and mixed-method analysis of use, preferences and performance. | Students primarily used the traditional style content, although the versions with fictional characters led to increased total study time. The system was positively rated in terms of accessibility, personalization and motivation, particularly among initially low-performing students. A long-term preference shift toward the traditional style was observed. The study highlights the technical and pedagogical feasibility of integrating generative AI to personalize learning via the LMS.
Venter et al. [34] | To design, implement and evaluate a web application integrated with GPT-4 to generate automated feedback on written assignments in a large university accounting course, based on the effective feedback principles of Nicol and Macfarlane-Dick. | 75 written assignments from second-year accounting students, selected from five different assessments. | South Africa | Exploratory study with iterative development of a no-code application (Bubble.io), prompt design and validation and feedback quality analysis using a rubric based on seven principles of effective feedback. Evaluation conducted at three levels (adherence: none, partial, or full). | GPT-4-generated feedback achieved an average adherence score of 2.67 out of 3. Strengths were noted in motivation, dialogue and closing feedback loops, while weaknesses appeared in content accuracy and promotion of self-reflection. Teacher supervision was recommended to ensure quality and avoid ethical risks or critical errors.
Wang et al. [71] | To develop GlyphGenius, a platform based on AIGC and visual redrawing of Chinese characters through a multi-stage generative model, aimed at improving semantic learning of characters among non-native learners. | 135 participants (121 Chinese language beginners) in a semantic recognition questionnaire, plus 21 volunteers in touchscreen usability tests. | China | Development of an interactive system with a graphical UI and customized generative modules (Stable Diffusion + LoRA + handwriting recognition). Technical evaluation (FID, CLIP, SSIM, OCR), quantitative A/B testing and satisfaction analysis. | The experimental group using redrawn characters improved semantic recognition by 12.76% compared to the control group. The average satisfaction score was 4.24/5. The multi-stage model enhanced structural and aesthetic recognizability of characters, though limitations were noted with unclear prompts and non-pictographic characters.
Valverde-Rebaza et al. [72] | To compare the effectiveness of three approaches (traditional programming, ChatGPT and LIDA + GPT) for developing data analytics projects among students and professionals without advanced computational training. | 59 participants (43 students and 16 professionals from various disciplines), with different levels of experience in programming and data analytics. | Mexico | Case study conducted through practical sessions with three sequential activities: traditional development, development assisted by ChatGPT and development using LIDA + GPT integration via API. Data were collected through questionnaires, logs and comparative analysis of time, ease of use, accuracy and result perception. | The LIDA + GPT approach was rated the most accurate and appropriate, although it involved higher initial technical complexity. ChatGPT stood out for its ease and speed of use. Both approaches outperformed traditional programming in perceived efficiency, especially among professionals. The study concludes that integrating generative tools can significantly enhance data analytics learning, provided technical and training barriers are addressed.
Garefalakis et al. [73] | To develop and evaluate a remote laboratory platform (HMU-RLP) for teaching Arduino microcontroller programming, integrating supervision and automated assessment systems using AI and xAPI. | No specific student sample reported. The study focuses on technical design, implementation and functional comparison with other RL platforms. | Greece | Technical design and comparative evaluation. Implementation of three types of automated assessment (user actions, shadow microcontroller control and AI-based evaluation). Use of xAPI to log learning analytics and enable personalization. | The HMU-RLP platform automates code evaluation, monitors user interactions and allows for hardware-level analysis. It uses xAPI to track learning and offer adaptive learning pathways. It overcomes limitations of other RLs by enabling real-time evaluation and differentiated teacher control. The platform is proposed as a model for supervised and ubiquitous learning.
Hervás et al. [74] | To design and evaluate a modular API ecosystem to support the development of assistive technologies for individuals with cognitive disabilities, promoting interoperability, personalization and service reusability. | 6 applications for cognitive support developed using this ecosystem: PICTAR, LeeFácil, AprendeFácil, ReadIt, Pict2Text and EmoTraductor. Validation with users, special education experts and technical integration tests. | Spain | Architectural design based on microservices and open APIs (REST and GraphQL), with real use cases. Qualitative evaluation of adaptability, scalability and development efficiency according to accessibility and reusability criteria. | The ecosystem enabled integration of features such as text simplification, pictograms and emotional analysis across multiple applications. It was noted for its personalization capabilities, low maintenance cost and high potential for collaborative innovation. A centralized platform is planned to enhance API management in terms of security, scalability and traceability.
Santhosh et al. [75] | To develop and evaluate an adaptive learning system based on real-time eye tracking, which generates personalized summaries using ChatGPT when low student engagement is detected. | 22 university students (11 experimental, 11 control), all with advanced English proficiency. | Germany and Japan | Mixed-method design using eye tracking (Tobii 4C, 90 Hz), InceptionTime and Transformer models for engagement prediction. The experimental group received summaries generated by ChatGPT when low attention was detected. Evaluation included questionnaires, objective and subjective metrics and statistical validation. | The experimental group demonstrated higher engagement, comprehension and confidence (p < 0.01). The system achieved 68.15% accuracy in engagement prediction. Adaptive interventions stabilized visual patterns, reduced cognitive load and improved focus on content. The study highlights the feasibility of integrating gaze data and LLMs to personalize learning experiences in real time.
Chen & Wang [1] | To identify the most effective learning behaviors in a web-based scientific inquiry environment through process data analysis, sequential pattern mining and lag-sequential analysis supported by xAPI. | 48 seventh-grade secondary education students, divided into high- and low-performing groups for comparative analysis. | Taiwan | Pre-experimental design. Process data recorded via xAPI in CWISE. Analysis included correlations, sequential pattern mining and lag-sequential analysis to compare behavioral sequences between groups with different performance levels. | Inquiry competence and time spent on the buoyancy simulation significantly predicted performance. High-performing students revised their hypotheses after experiments, while low-performing students analyzed without experimenting. The analytical tools enabled identification of effective behavioral sequences to guide instructional interventions.
Okonkwo & Ade-Ibijola [76] | To design and evaluate Python-Bot, an educational chatbot developed using the SnatchBot platform to support beginner students in understanding fundamental Python programming concepts. | 205 university students (mostly first-year) enrolled in an introductory programming course at the University of Johannesburg. | South Africa | No-code chatbot design with a conversational interface. Evaluation through a perception survey covering ease of use, response accuracy, usefulness of feedback and learning improvement. Integrated features included algorithm explanations, code examples and tutorial scheduling. | 81.4% of students reported that Python-Bot facilitated their programming learning. Over 99% validated the accuracy of its responses and 73.7% found it easy to use. Benefits were noted in terms of accessibility, personalization and support in remote learning contexts (COVID-19). Improvements were suggested for advanced syntax support and teacher integration.
Santamaría-Bonfil et al. [49] | To develop a learning ecosystem for lineman training based on xAPI, Big Data and learning analytics, capable of integrating both legacy and new data to personalize learning pathways. | 43 maintenance procedures from the LMT training program and 9 graduate students involved in a proof-of-concept study on self-records (SR). | Mexico | Technical and experimental design. Development of a domain model using text mining (BoW, DTM, clustering) and exploratory analysis of informal and emotional self-records using xAPI and DEQ. Proof of concept conducted with bookmarklets and LRS. | The hierarchical model enabled the definition of personalized learning pathways based on procedure similarity. Informal self-records showed potential as a source of support content, while emotional records served as indicators of student engagement. Technical, ethical and standardization challenges were identified in using xAPI with emotional SRs, but the feasibility of the approach for personalized training in technical contexts was validated.
Gilbert et al. [30] | To describe the architecture, applications and evaluations of the xPST system, an authoring tool designed to create intelligent tutors integrated into third-party software without requiring advanced programming or cognitive science knowledge. | xPST was evaluated with 75 participants: 49 statistics students and 16 content authors across various educational and development environments. | United States | Series of exploratory and comparative studies (both quantitative and qualitative): implementation of tutors in real software environments (Paint.NET v.3.5, CAPE v7, Torque 3D v. 3.5, Firefox v.29), analysis of learning outcomes, usability, efficiency, user perception and authoring time. | xPST enables the creation of effective tutors at low training cost, making it accessible to non-programmers. In several contexts, xPST-generated tutors improved student performance and satisfaction. The tool was positively rated for ease of use, though it showed limitations in flexibility and generalizability compared to other systems such as CTAT.
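Several of the studies in Table 3 (Garefalakis et al. [73], Chen & Wang [1], Santamaría-Bonfil et al. [49]) rely on xAPI to log learner interactions in a Learning Record Store (LRS) and feed the analytics behind personalization. A minimal sketch of the kind of actor-verb-object statement such platforms emit is shown below; the learner, activity IRI and endpoint are hypothetical, while the statement shape follows the public xAPI specification rather than any specific platform described in the reviewed studies.

```python
# Sketch: a minimal xAPI statement of the kind the reviewed platforms send
# to an LRS. Actor, verb and object are the three required parts; the
# specific learner and activity below are illustrative only.
import json

def make_statement(actor_email, verb, activity_iri, activity_name):
    """Build a minimal actor-verb-object xAPI statement as a dict."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}", "objectType": "Agent"},
        "verb": {
            # ADL publishes a set of common verb IRIs under this namespace.
            "id": f"http://adlnet.gov/expapi/verbs/{verb}",
            "display": {"en-US": verb},
        },
        "object": {
            "id": activity_iri,
            "definition": {"name": {"en-US": activity_name}},
            "objectType": "Activity",
        },
    }

stmt = make_statement(
    "student@example.org",
    "completed",
    "https://example.org/activities/arduino-lab-1",  # hypothetical activity
    "Arduino remote lab exercise 1",
)
print(json.dumps(stmt, indent=2))

# An LRS receives such statements via an authenticated POST to its
# /statements endpoint; dashboards can then aggregate verbs per learner
# to drive the adaptive pathways described in the studies above.
```

Because every platform emits the same statement shape, records from a remote lab, a chatbot, or an LMS can be pooled in one LRS, which is the interoperability benefit the review attributes to xAPI.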
Share and Cite

MDPI and ACS Style

Pérez-Jorge, D.; González-Afonso, M.C.; Santos-Álvarez, A.G.; Plasencia-Carballo, Z.; Perdomo-López, C.d.l.Á. The Impact of AI-Driven Application Programming Interfaces (APIs) on Educational Information Management. Information 2025, 16, 540. https://doi.org/10.3390/info16070540
