Abstract
The integration of generative artificial intelligence (GenAI) into educational contexts has sparked significant interest in its potential to enhance learning engagement. However, empirical findings remain inconsistent, and a systematic synthesis of its effects across distinct engagement dimensions is lacking. This meta-analysis synthesizes evidence from 31 empirical studies (91 effect sizes), with the core aim of investigating the relationship between learners’ GenAI use and learning engagement, alongside the role of key moderating variables. Results indicate that GenAI exerts a significant positive effect on overall learning engagement, demonstrating the strongest impact on cognitive engagement, followed by affective and behavioral engagement. Moderator analyses reveal nuanced, sub-dimension-specific effects: the positive influence is most pronounced in higher education, shows significant benefits across all three sub-dimensions in basic education, and is non-significant in continuing education; medium-duration interventions (1 day–1 month) yield the largest effects; and teacher intervention significantly amplifies gains in cognitive engagement. Both learning mode and interaction approach exert significant positive effects on overall learning engagement, while their impacts on the sub-dimensions did not show significant heterogeneity. This study enriches the theoretical system of educational technology integration by clarifying the directional effect and moderating mechanisms of GenAI on learning engagement and provides a nuanced evidence base for designing context-sensitive implementations, offering valuable insights for fostering personalized and engaging learning experiences.
1. Introduction
The rapid integration of generative artificial intelligence (GenAI), particularly tools like ChatGPT and other large language models, into educational contexts is driving a paradigm shift. Unlike traditional technologies that merely deliver static content, GenAI enables dynamic and personalized learning experiences through capabilities such as context-aware content generation and real-time adaptive feedback (). These advancements position GenAI as a promising catalyst for addressing one of education’s most enduring challenges: fostering deep, sustained learning engagement ().
This dynamic nature sets GenAI apart from earlier educational technologies. Its key distinction is a unique capacity to holistically and dynamically scaffold learning engagement. This support spans the cognitive, behavioral, and affective dimensions of learning. Unlike traditional tools such as static learning management systems, pre-recorded lectures, or basic homework aids, GenAI moves beyond one-way information delivery (). It can tailor problem-solving prompts to a learner’s cognitive level, offer guidance to sustain task persistence, and deliver supportive interactions to reduce learning anxiety (). In hybrid learning contexts, where traditional tools often reinforce passive consumption, GenAI actively transforms engagement. It fosters critical thinking through contextualized questions and maintains behavioral involvement with dynamic task adjustments (). Consequently, the transformative nature of GenAI underscores the urgency of investigating its overall and dimension-specific effects on learning engagement.
Precisely because understanding GenAI’s impact on engagement is so pressing, the central importance of this outcome variable warrants emphasis. Learning engagement is closely linked to key educational outcomes such as learning achievement (), persistence (), and satisfaction (), and it is therefore regarded as playing an essential role in education. Scholars describe it as the ‘educational bottom line’ and the ‘holy grail of learning’ (). Despite its well-documented importance, fostering optimal engagement across these interrelated domains remains a pressing challenge in educational practice. Notably, the interactive and adaptive nature of GenAI provides a promising avenue to address diverse learner needs and bridge this longstanding gap ().
However, the burgeoning empirical literature on the impact of GenAI on learning engagement presents inconsistent and often contradictory findings. Some studies report substantial positive effects (), particularly on cognitive development and affective attitudes, attributing these gains to GenAI’s capacity to scaffold complex tasks and enhance motivation (; ). Conversely, another strand of research sounds a note of caution, highlighting potential negative consequences such as the risk that over-reliance hinders critical thinking or that algorithmic biases cause confusion (; ). Further complicating the picture, a number of studies find no statistically significant effects (). These discrepancies create considerable ambiguity for educators and policymakers and point to critical limitations in the current literature itself. Most existing studies are confined to specific contexts, such as higher education, or focus narrowly on a single dimension of engagement, lacking a comprehensive synthesis (; ). More importantly, there is a scarcity of research that systematically investigates the moderating factors that may explain the varying effectiveness of GenAI, leaving questions about the optimal conditions for its application largely unanswered.
Notably, learning engagement is a multifaceted construct encompassing students’ cognitive, behavioral, and emotional efforts and contributions in the learning process (). As GenAI finds increasing application in educational contexts, scholars have explored how this technology influences learning engagement (; ). Yet, when selecting dependent variables, the majority of these investigations fail to encompass diverse dimensions. Most existing studies conceptualize GenAI’s impact on learning engagement as a single, undifferentiated construct and lack differentiated analysis across key sub-dimensions (). This approach fails to clarify variations in both the intensity and mechanisms of GenAI’s influence across these dimensions, an oversight that hinders the development of targeted strategies for addressing engagement challenges and withholds the theoretical support and practical guidance needed for the precise implementation of GenAI in educational settings.
To address these gaps, this study employs a meta-analytic approach to quantitatively synthesize the existing empirical evidence. As a powerful method for integrating findings across independent studies, meta-analysis can provide a robust estimate of the overall effect size and illuminate the role of key contextual moderators (; ). Specifically, this research is guided by three questions:
- Q1: What is the overall effect of learners’ use of GenAI on learning engagement, and how does this effect manifest across its cognitive, behavioral, and affective sub-dimensions?
- Q2: To what extent is this relationship influenced by key moderating variables, including educational stage, duration, learning mode, interaction approach, and the presence of teacher intervention?
- Q3: How do these moderators specifically affect each sub-dimension of learning engagement?
By addressing these questions, this study aims to move beyond asking whether GenAI works toward a deeper understanding of how its impact differs across sub-dimensions and of the conditions that maximize its effectiveness. The findings are expected to provide clarity for researchers, evidence-based guidance for educators, and actionable insights for developers of educational technologies.
2. Literature Review
2.1. Learning Engagement
Learning engagement is widely recognized as a pivotal indicator for assessing educational quality, reflecting both the depth of learners’ investment and the extent of their participation in instructional activities. () was among the first to systematically conceptualize student engagement, defining it through two core dimensions: the time and effort students devote to educational activities and the supportive resources and mechanisms provided by institutions to facilitate such engagement.
Building on this foundation, () developed an influential three-dimensional framework, categorizing learning engagement into behavioral, emotional, and cognitive aspects. This model has provided a robust theoretical basis for subsequent research. Specifically, cognitive engagement involves the mental effort and strategic approaches learners employ, such as conceptual integration, deep information processing, and problem-solving (). Behavioral engagement refers to observable actions during learning, such as attentiveness in class, active questioning, participation in discussions, and task persistence (). Emotional engagement encompasses learners’ affective experiences in the educational environment, including emotions like interest, satisfaction, anxiety, and boredom ().
As theoretical understanding evolved, () proposed expanding the framework to include a fourth dimension: social engagement. They argued that interactive behaviors and social networks formed in collaborative learning contexts should be considered integral to engagement. However, empirical studies have shown that social engagement often overlaps significantly with behavioral engagement in terms of measurement, leading to considerable coupling in statistical models that is difficult to disentangle (). Thus, the three dimensions of cognitive, behavioral, and emotional engagement constitute a widely adopted analytical framework.
When considering learning engagement in digital environments, particularly with generative AI, a shift from traditional conceptions becomes evident. Traditional engagement is largely a socially constructed process, facilitated through direct human interaction in physical classrooms (). In contrast, digital learning engagement with GenAI is characterized by a human–computer collaborative process. This shift redefines the dimensions of engagement: cognitive engagement is driven by AI-powered, adaptive scaffolding; behavioral engagement is captured as quantifiable interaction data; and affective engagement is mediated through algorithmic responses rather than interpersonal warmth (). This digital reconceptualization validates the applicability of the three-dimensional model for capturing engagement in GenAI-assisted learning, while also highlighting that the nature and drivers of each dimension are uniquely shaped by the technological medium.
Therefore, grounded in the established theoretical foundations, this study adopts the three-dimensional model, comprising cognitive, behavioral, and emotional engagement, to examine the impact of generative artificial intelligence on learning engagement.
2.2. GenAI and Learning Engagement
Endowed with the features of initiative, adaptability, and generativity, GenAI has revolutionized the dynamic regulation and feedback mechanism of the ‘human-in-the-loop’ paradigm (). It achieves an intelligent leap and a multiplier effect, transitioning from human-guided processes to machine-driven intelligent generation, thus opening up new avenues for the digital advancement of education (). However, the academic community remains divided on whether GenAI can truly enhance learners’ learning engagement. Existing empirical investigations into this core issue have yielded three primary conclusions:
Certain studies underscore that GenAI exerts a notably positive influence on fostering learning engagement (). Its real-time feedback mechanism and efficient content-generation capabilities enable the customization of learning pathways that align with learners’ cognitive levels. This not only enhances academic performance (), but also cultivates crucial competencies like critical thinking and problem-solving ().
Other studies have uncovered the potential adverse effects of GenAI implementation. For instance, excessive reliance by learners might constrain their creative thinking, causing them to fall into rigid cognitive patterns and impeding the development of innovative capabilities (). Moreover, problems such as data quality deficiencies and inherent biases could mislead learners (), giving rise to cognitive inaccuracies and ethical hazards ().
However, some research indicates that there is no statistically significant association between GenAI and learning engagement (). A number of scholars have failed to detect any significant improvements in learners’ cognitive abilities attributable to GenAI ().
These studies indicate that due to variations in experimental subjects, model setups, and analytical methods, the academic community has not yet reached a consensus regarding the relationship between GenAI application and learning engagement.
2.3. Research Gap
As the preceding review shows, the academic community has not yet reached a consensus on the relationship between GenAI application and learning engagement, owing to variations in experimental subjects, model setups, and analytical methods. Moreover, there is a lack of systematic investigation into the multiple moderating variables and the sub-dimensions of learning engagement (cognition, behavior, and emotion). Given these conflicting findings (positive impact, negative impact, and no significant correlation) and this limited analysis, it is critical to systematically examine four core questions regarding GenAI’s role in learning engagement: whether it truly exerts an impact, the magnitude of that impact, how the impact differs across sub-dimensions, and the key factors that moderate the relationship.
To address these gaps, this study conducted a meta-analysis following PRISMA guidelines. Thirty-one eligible studies were included, and 91 effect sizes were extracted for quantitative analysis via CMA 3.7. The overall and dimensional effects across cognitive, behavioral, and affective dimensions were systematically analyzed. The insights generated will thus serve to advance academic understanding, shape effective pedagogical integration, and inspire the creation of more adaptive and engaging learning tools.
3. Materials and Methods
Meta-analysis, first put forward by American educational psychologist Gene V. Glass in 1976 (), is a quantitative analytical approach. It synthesizes the results of multiple independent studies centered on the same research objective, uncovers the commonalities and differences among similar studies, and thereby derives generalizable conclusions (). In contrast to the descriptive nature of traditional literature reviews, meta-analysis can compensate for the subjective biases arising from qualitative analysis (). As an effective means of exploring variable relationships, meta-analysis has been widely utilized to examine the impacts of self-efficacy (), parental involvement (), and augmented reality technology () on learning engagement. This provides robust evidence for the deep integration of computer technology and education. With the application of GenAI in education, some researchers have begun to analyze its influence on learning outcomes.
A meta-analysis was conducted to explore the influence of GenAI on learning engagement. Based on the meta-analysis framework proposed by (), this study systematically conducted literature screening and coding to obtain research data, and used CMA 3.7 to calculate the combined effect size, so as to explore the specific impact of each moderating variable on the effect size. The study followed PRISMA guidelines () to ensure methodological transparency in the review process. It was conducted in the following stages: (1) literature search; (2) study inclusion criteria; (3) study coding. These stages are described in the following subsections (Figure 1).
Figure 1.
PRISMA flow diagram.
3.1. Literature Search
To identify appropriate studies for analysis, a search was conducted in the main databases of WoS (Web of Science), EBSCO, SCOPUS, and IEEE Xplore. The subject search was conducted, and the Boolean logic search query employed was: (TS = (“Generative Artificial Intelligence”) OR TS = (“Generative AI”) OR TS = (“GenAI”) OR TS = (“Chat GPT”) OR TS = (“GPT”) OR TS = (“AIGC”) OR TS = (“Large Language Model”) OR TS = (“LLM”) OR TS = (“Gemi”) OR TS = (“DeepSeek”) OR TS = (“DouBao”) OR TS = (“ERNIE”) OR TS = (“Qwen”) OR TS = (“Claude”)) AND (TS = (“learning engagement”) OR TS = (“learning achievement”) OR TS = (“learning outcome”) OR TS = (“learning performance”) OR TS = (“learning effect”) OR TS = (“learning cognition”) OR TS = (“learning behavior”) OR TS = (“learning emotion”)). To ensure a comprehensive literature collection, a backward snowballing technique was employed. This involved tracking and examining the reference lists of all retrieved publications to identify any additional relevant studies, thereby expanding the scope of the search and minimizing the risk of overlooking pertinent research.
The search date was 18 August 2025. A total of 224 records were initially identified from these databases, including 131 from WoS, 54 from EBSCO, 13 from Scopus, and 26 from IEEE Xplore. This comprehensive search aimed to capture all relevant studies within the research scope. Subsequently, duplicate records (n = 41) and non-English studies (n = 21) were removed, leaving 162 studies for further screening.
3.2. Study Inclusion Criteria
The study inclusion process was designed to identify the most methodologically sound and relevant empirical research for quantitative synthesis. The criteria were applied in a staged screening process to ensure that only high-quality, comparable studies were included in the meta-analysis.
Studies were initially excluded based on practical and methodological grounds, including duplication and language (non-English). Subsequent exclusions were made for substantive reasons: studies outside the scope of the review, non-quantitative articles, and certain publication types—such as books, workshop presentations, and short papers—which typically lack the depth and peer-review rigor of full journal articles. During the full-text assessment, further exclusions were applied to uphold methodological standards. Non-empirical studies (e.g., theoretical or commentary papers) were omitted due to the lack of primary data. Studies without a control or comparison group were excluded to ensure valid effect size estimation for experimental and quasi-experimental contrasts. Additionally, studies reporting incomplete or insufficient data for effect size calculation were removed to maintain analytic integrity. A total of 131 studies were excluded at this stage, leaving 31 full-text studies for the next phase.
Table 1 presents the inclusion and exclusion criteria for literature at different screening stages.
Table 1.
Inclusion and exclusion criteria for literature.
3.3. Study Coding
The final set of included studies (n = 31) consisted of quantitatively oriented, empirical articles with complete data and appropriate research designs. To ensure the statistical validity of research data and the accuracy of effect size calculation, this study conducted systematic coding immediately after completing literature screening. The coding framework covers four dimensions: basic literature information, research data, dependent variables, and moderating variables, with specific coding rules detailed in Table 2.
Table 2.
Coding rules for selected literature.
To ensure coding reliability, this study adopted a dual-independent coder approach for the 31 articles, achieving a coding consistency coefficient of 91%. The coding consistency coefficient is computed as 2M/(N1 + N2), where M is the number of identical codes between the two researchers, and N1 and N2 are the respective numbers of codes assigned by each coder (). For articles and variables with divergent coding results, the research team reached a final consensus through in-depth discussions. This study’s coding framework supported the extraction of key effect-related metrics, allowing for the computation of 91 effect sizes and ensuring a robust and statistically reliable meta-analysis.
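The agreement formula above is simple enough to verify directly. A minimal sketch follows; the coder counts used here are illustrative, not the study’s actual coding tallies:

```python
# Percent agreement between two independent coders, following the
# 2M / (N1 + N2) formula described in the text. The counts below are
# hypothetical, chosen only to illustrate the arithmetic.
def agreement(m: int, n1: int, n2: int) -> float:
    """m: identical codes; n1, n2: total codes assigned by each coder."""
    return 2 * m / (n1 + n2)

# e.g. two coders each assigning 100 codes, 91 of them identical:
print(agreement(91, 100, 100))  # 0.91, i.e. the 91% level reported
```

Note that this simple percent-agreement index, unlike Cohen’s kappa, does not correct for chance agreement, which is why divergent codes were additionally resolved through discussion.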
4. Results
4.1. Publication Bias
Publication bias arises when the selection of studies for publication is influenced by the direction or strength of their results, often due to preferences of journal editors, reviewers, and researchers. To assess publication bias in the included studies, this study employed multiple methods: visual inspection of funnel plots, the fail-safe number test, Egger’s linear regression test, and Begg’s rank correlation test.
The funnel plot exhibited approximate symmetry in the upper-middle region, with effect sizes distributed fairly evenly on both sides, suggesting an absence of substantial publication bias (Figure 2). The fail-safe number analysis yielded an Nfs value of 4462, substantially exceeding the recommended threshold of 5k + 10 (where k represents the number of effect sizes included). Furthermore, Egger’s linear regression test produced a p-value of 0.263, and Begg’s rank correlation test resulted in a p-value of 0.083, both non-significant (p > 0.05). These consistent findings across multiple tests indicate that publication bias is not a significant concern in the current sample, thereby supporting the validity of proceeding with meta-analysis.
Figure 2.
Funnel plot of publication bias.
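The fail-safe number criterion cited above can be checked directly against the reported figures; a quick sketch using the values from the text:

```python
# Rosenthal's fail-safe N criterion: results are considered robust to
# publication bias when N_fs exceeds 5k + 10, where k is the number of
# effect sizes. Both input values are taken from the text.
k = 91                    # effect sizes included in the meta-analysis
n_fs = 4462               # reported fail-safe number
threshold = 5 * k + 10
print(threshold)          # 465
print(n_fs > threshold)   # True: 4462 far exceeds the threshold
```

Since 4462 is nearly ten times the threshold of 465, an implausibly large number of unpublished null studies would be required to overturn the pooled effect.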
4.2. Effect Size and Heterogeneity
To examine the impact of learners’ use of GenAI on learning engagement and to explore heterogeneity across studies, we calculated effect sizes and performed heterogeneity tests (Table 3). Both fixed-effects and random-effects models were applied to estimate the overall effect of GenAI on learning engagement. The fixed-effects model showed an effect size of 0.518 (p < 0.001), while the random-effects model yielded an effect size of 0.645 (p < 0.001). Heterogeneity testing revealed a Q statistic of 868.281 (p < 0.001) and an I2 value of 89.635%. This high level of I2 indicates substantial heterogeneity among the included studies, suggesting that the variation in effect sizes reflects genuine differences in study contexts or methodologies rather than sampling error alone. Due to the significant heterogeneity, the random-effects model, which incorporates between-study variance, is considered more appropriate for estimating the overall effect.
Table 3.
Overall effect size and heterogeneity.
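The reported I² value follows directly from the Q statistic and the number of effect sizes via Higgins’ standard formula, I² = (Q − df)/Q × 100 with df = k − 1; a quick check against the figures above:

```python
# Recomputing I^2 from the reported Q statistic and degrees of freedom.
Q = 868.281
k = 91                                   # effect sizes; df = k - 1 = 90
df = k - 1
i_squared = max(0.0, (Q - df) / Q) * 100
print(round(i_squared, 3))               # 89.635, matching the text
```

An I² near 90% places the sample well above the conventional 75% benchmark for substantial heterogeneity, supporting the choice of the random-effects model.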
We further evaluated the influence of GenAI on specific dimensions of learning engagement using the random-effects model (Table 4). GenAI use showed a large and statistically significant positive effect on cognitive development (g = 0.952, p < 0.001). For behavioral competence, a moderate yet significant effect was observed (g = 0.481, p < 0.001). Similarly, a moderate positive effect was identified in affective attitude (g = 0.546, p < 0.001). The heterogeneity test across these three dimensions yielded Q = 7.879, p = 0.019, indicating significant variability in the effect of GenAI on different aspects of learning engagement. This suggests that the impact of GenAI may vary substantively across cognitive development, behavioral competence, and affective attitude, possibly due to differences in learner characteristics, intervention designs, or contextual factors. Further moderator analysis is warranted to elucidate the sources of this heterogeneity.
Table 4.
Sub-dimensional effect size and heterogeneity.
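For readers unfamiliar with how an individual Hedges’ g enters the pooling, the following is a minimal sketch of the standard computation from one experimental/control contrast. The group statistics are hypothetical, purely for illustration; the study’s own effect sizes were computed in CMA 3.7:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction factor
    return j * d

# Hypothetical contrast: GenAI group M=4.2, SD=0.8, n=30
# vs. control group M=3.7, SD=0.9, n=30
print(round(hedges_g(4.2, 0.8, 30, 3.7, 0.9, 30), 2))  # 0.58, a medium effect
```

Under the random-effects model, each such g is then weighted by the inverse of its within-study variance plus the estimated between-study variance before pooling.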
4.3. Effect Sizes of Moderator Variables on Learning Engagement
Subgroup analyses were conducted across moderator variables. The results are summarized in Table 5.
Table 5.
Overall effect size of moderator variables.
Educational stage significantly moderated the effect of GenAI. A moderate positive effect was observed in basic education (g = 0.626, p < 0.001), suggesting beneficial but relatively limited engagement support. A stronger effect was found in higher education (g = 0.826, p < 0.001), possibly due to students’ greater self-directed learning abilities and adaptability to GenAI. The effect was non-significant in continuing education (g = 0.353, p > 0.05), which may reflect the limited sample size and the diversity of learners’ backgrounds. Significant heterogeneity was detected across educational stages (p < 0.05).
Intervention duration also served as a significant moderator. Short-term interventions (<1 day) showed a moderate effect (g = 0.537, p < 0.001), indicating that GenAI can rapidly stimulate engagement through novelty. Medium-term interventions (1 day–1 month) yielded the largest effect (g = 0.859, p < 0.001), as learners had sufficient time to adapt and integrate GenAI into their learning process. Long-term use (1–4 months) remained beneficial but showed a smaller effect size (g = 0.647, p < 0.001), possibly due to fading novelty or challenges in sustaining interaction. Heterogeneity was significant across duration subgroups (p < 0.05).
Regarding learning mode, GenAI positively affected both independent (g = 0.683, p < 0.001) and collaborative learning (g = 0.720, p < 0.001). The slightly higher estimate for collaborative settings may stem from GenAI’s ability to facilitate group discussions and knowledge sharing. However, the difference was not statistically significant (p > 0.05).
For interaction approach, text-based interactions resulted in a moderate effect (g = 0.754, p < 0.001), highlighting GenAI’s capabilities in processing and generating language. Multi-modal interactions also showed a positive effect (g = 0.659, p < 0.001), although the difference between the two approaches was not statistically significant (p > 0.05).
Teacher intervention significantly moderated learning engagement. Without teacher involvement, GenAI still produced a moderate effect (g = 0.613, p < 0.001). With teacher intervention, the effect was stronger (g = 0.832, p < 0.001), indicating that guidance from instructors helps learners use GenAI more effectively and resolve difficulties promptly. The between-group difference was significant (p < 0.05).
4.4. Effect Sizes of Moderator Variables on Multi-Dimension of Learning Engagement
Table 6 presents the effect sizes of GenAI on each dimension of learning engagement across different educational levels. In basic education, GenAI showed significant positive effects on cognitive development (g = 0.777, p < 0.001), behavioral competence (g = 0.580, p < 0.05), and affective attitude (g = 0.453, p < 0.05). In higher education, GenAI produced strong, significant effects on cognitive development (g = 1.295, p < 0.001) and affective attitude (g = 0.700, p < 0.001), while the effect on behavioral competence was not significant (g = 0.381, p > 0.05). In continuing education, GenAI had a significant positive effect on behavioral competence (g = 0.537, p < 0.05), a non-significant positive trend in cognitive development (g = 0.353, p > 0.05), and a non-significant negative effect on affective attitude (g = −0.014, p > 0.05). Heterogeneity was higher in higher education studies (p < 0.01) than in basic (p > 0.05) or continuing education (p > 0.05) studies, indicating greater variability in the effects of GenAI in higher education settings.
Table 6.
Effect size of educational level on sub-dimensions of learning engagement.
Table 7 presents the effect sizes of GenAI on each dimension of learning engagement across different experimental durations. In short-term experiments, GenAI showed a non-significant positive trend in behavioral competence (g = 0.192), and tentative effects in cognitive and affective domains. Medium-term interventions produced strong, significant effects across all dimensions: cognitive development (g = 1.067, p < 0.001), behavioral competence (g = 0.525, p < 0.05), and affective attitude (g = 0.944, p < 0.001). In long-term applications, significant benefits persisted for cognitive development (g = 0.853, p < 0.001) and behavioral competence (g = 0.545, p < 0.01), but the effect on affective attitude was not significant (g = 0.409, p > 0.05). Heterogeneity was higher in short-term studies (p < 0.05) than in medium (p > 0.05) or long-term (p > 0.05) studies, indicating greater variability in shorter interventions.
Table 7.
Effect size of duration on sub-dimensions of learning engagement.
Table 8 presents the effect sizes of GenAI on each dimension of learning engagement across different learning modes. In self-regulated learning, GenAI showed significant positive effects on cognitive development (g = 0.951, p < 0.001), behavioral competence (g = 0.413, p < 0.05), and affective attitude (g = 0.548, p < 0.01). In collaborative learning, GenAI produced strong, significant effects on cognitive development (g = 0.949, p < 0.001), behavioral competence (g = 0.620, p < 0.001), and affective attitude (g = 0.550, p < 0.001). Heterogeneity was higher in self-regulated learning studies (p < 0.05) than in collaborative learning (p > 0.05) studies, indicating a trend of greater variability in the effects of GenAI in self-regulated learning settings.
Table 8.
Effect size of learning mode on sub-dimensions of learning engagement.
Table 9 presents the effect sizes of GenAI on each dimension of learning engagement across different interactive approaches. For the text-only approach, GenAI had a significant positive effect on cognitive development (g = 1.077, p < 0.001) and behavioral competence (g = 0.533, p < 0.05), while the effect on affective attitude was non-significant (g = 0.446, p > 0.05). In the multi-modal approach, GenAI produced strong, significant effects on cognitive development (g = 0.892, p < 0.001), behavioral competence (g = 0.438, p < 0.05), and affective attitude (g = 0.570, p < 0.001). Heterogeneity was similar between text-only (p > 0.05) and multi-modal (p > 0.05) approaches, indicating no significant difference in the variability of GenAI’s effects between these two interactive modes.
Table 9.
Effect size of interactive approaches on sub-dimensions of learning engagement.
Table 10 presents the effect sizes of GenAI on each dimension of learning engagement with and without teacher intervention. Without teacher intervention, GenAI had significant positive effects on cognitive development (g = 0.745, p < 0.001), behavioral competence (g = 0.473, p < 0.05), and affective attitude (g = 0.553, p < 0.001). With teacher intervention, GenAI produced even stronger, significant effects on cognitive development (g = 1.372, p < 0.001), behavioral competence (g = 0.490, p < 0.05), and affective attitude (g = 0.531, p < 0.05). Heterogeneity was higher in the with-teacher-intervention group (p < 0.01) than in the without-teacher-intervention group (p > 0.05), indicating greater variability in the effects of GenAI when teacher intervention is involved.
Table 10.
Effect size of teacher intervention on sub-dimensions of learning engagement.
5. Discussion
5.1. Responses to the Three Research Questions
In response to the first research question, focused on GenAI’s overall effect on learning engagement and its manifestation across sub-dimensions, results confirm that GenAI exerts a significant positive impact on overall learning engagement. This impact varies notably across the three sub-dimensions of learning engagement: GenAI demonstrates the strongest effect on cognitive development, followed by affective attitude and behavioral competence.
The finding of GenAI’s overall significant effect aligns with the ‘technology-enhanced engagement’ framework, which posits that adaptive and interactive technologies can amplify learners’ investment in learning processes (). GenAI’s real-time feedback, personalized content generation, and ability to adapt to individual cognitive levels likely contribute to this positive impact, addressing the core challenge of ‘one-size-fits-all’ instruction that often limits engagement in traditional educational settings. The prominent effect on cognitive development is consistent with ()’s findings that GenAI enhances conceptual understanding and problem-solving through targeted scaffolding. This suggests GenAI excels at fostering deep cognitive processing, a key driver of long-term learning retention. The moderate effect on affective attitude highlights GenAI’s potential to mitigate negative emotions, such as anxiety and boredom, and enhance positive ones, such as interest and satisfaction. This may stem from GenAI’s ability to provide immediate positive reinforcement, like affirming correct responses, and reduce frustration by offering on-demand support (). However, the smaller effect size relative to cognitive development indicates that GenAI’s emotional impact is more context-dependent; it may be less effective at addressing deep-seated affective barriers without additional human support. The weakest, yet still significant, effect on behavioral competence raises important questions about GenAI’s role in shaping observable learning behaviors. While GenAI increases interaction frequency and task persistence, as reflected in behavioral engagement metrics, it may not fully replace the social and motivational cues provided by in-person interactions (). This finding underscores the need to view GenAI as a complement to, rather than a replacement for, social learning contexts.
Regarding the second research question, which explored the influence of key moderating variables on the GenAI-learning engagement relationship, five core variables are identified as significant moderators. Educational stage matters: GenAI is most effective in higher education, followed by basic education, while its effect in continuing education is non-significant. Intervention duration presents a “medium-term golden period”: medium-term use (1 day–1 month) yields the largest effect, surpassing both short-term and long-term interventions. Teacher intervention significantly amplifies GenAI’s effect, with stronger impacts when teachers provide guidance. Learning mode (independent vs. collaborative) and interaction approach (text-only vs. multi-modal) both sustain positive effects of GenAI, though the differences between subgroups are not statistically significant.
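The subgroup comparisons summarized above rest on the between-group heterogeneity (Q-between) statistic standard in meta-analysis: a non-significant Q-between, as found for learning mode and interaction approach, means the subgroup pooled effects do not differ more than sampling error would predict. The sketch below illustrates this test with invented effect sizes and variances (not the study’s data); the subgroup labels are hypothetical examples.

```python
# Illustrative sketch of a subgroup (moderator) test in meta-analysis.
# All effect sizes (Hedges' g) and variances below are invented for
# illustration and are NOT drawn from the studies in this review.
import math

def pooled(effects, variances):
    """Inverse-variance pooled estimate within a subgroup (fixed-effect
    weighting, for simplicity); returns the estimate and its variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    return est, 1.0 / sum(weights)

# Hypothetical subgroups, e.g., text-only vs. multi-modal interaction
text_only   = pooled([0.35, 0.50, 0.42], [0.020, 0.030, 0.025])
multi_modal = pooled([0.55, 0.48, 0.60], [0.030, 0.020, 0.040])

subgroups = [text_only, multi_modal]
w = [1.0 / var for _, var in subgroups]
grand = sum(wi * est for wi, (est, _) in zip(w, subgroups)) / sum(w)

# Q_between follows chi-square with (groups - 1) df under the null
# hypothesis that the subgroup effects are equal.
q_between = sum(wi * (est - grand) ** 2 for wi, (est, _) in zip(w, subgroups))
p_value = math.erfc(math.sqrt(q_between / 2.0))  # chi-square sf for df = 1

print(f"Q_between = {q_between:.3f}, p = {p_value:.3f}")
```

With these invented inputs the test is non-significant (p > 0.05), mirroring the pattern reported for learning mode and interaction approach: both subgroups show positive effects, but the difference between them is not statistically reliable.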
The pronounced effect in higher education can be attributed to three key factors: learner autonomy, task complexity, and sample limitations in continuing education. First, higher education students typically exhibit stronger self-directed learning skills, such as exploring niche topics and customizing study plans, enabling them to leverage GenAI’s flexibility more effectively than younger learners (). Second, tasks such as academic writing and research projects often require higher-order cognitive skills that GenAI is well suited to support. Third, the non-significant effect in continuing education may reflect the small sample size (k = 10) and diverse learner backgrounds, with varying prior knowledge and time constraints, which reduce the consistency of GenAI’s impact. Regarding duration, the ‘medium-term peak’ aligns with the ‘technology adaptation cycle’: initially, learners may be motivated by novelty, but true engagement requires time to integrate GenAI into routine learning. The decline in long-term effect size may reflect ‘engagement fatigue’ or the need for periodic updates to GenAI tools to maintain relevance (). Teacher intervention supports the “human-AI synergy” hypothesis, which argues that teachers provide critical scaffolding, such as guiding learners to use GenAI ethically, interpreting GenAI outputs, and addressing misconceptions, that GenAI alone cannot deliver (). In addition, GenAI is versatile across learning contexts: in independent settings, it acts as a ‘personal tutor’ that provides individualized practice, while in collaborative settings, it functions as a ‘facilitator’ that summarizes group discussions and generates shared resources (). Moreover, GenAI’s core engagement benefits, such as content relevance and feedback speed, do not depend on multi-modal features like images and audio ().
For the third research question, which examined how moderators affect each sub-dimension of learning engagement, contextualized heterogeneous effects emerge. In higher education, GenAI strongly boosts cognitive and affective engagement but not behavioral engagement; basic education benefits across all three sub-dimensions, with the smallest effect on affective engagement; and continuing education shows a significant effect only on behavioral engagement. Medium-term interventions drive the strongest effects across all sub-dimensions, short-term interventions significantly impact only cognitive engagement, and long-term interventions maintain effects on cognitive and behavioral engagement but not affective engagement. Collaborative learning strengthens GenAI’s effect on behavioral engagement, while independent learning shows higher heterogeneity in cognitive engagement. Text-only interactions excel in cognitive engagement but not affective engagement, whereas multi-modal interactions benefit all three sub-dimensions, especially affective engagement. Teacher intervention most strongly amplifies cognitive engagement and shows a slightly smaller effect on affective engagement.
In higher education, GenAI’s effect on behavioral engagement is non-significant, suggesting that students may rely on GenAI to complete tasks such as writing papers without increasing active participation in activities such as class discussions. In basic education, younger learners require more in-person emotional support, such as teacher praise, which GenAI cannot fully provide (). In continuing education, GenAI significantly impacts only behavioral engagement, reflecting the practical, skill-focused goals of adult learners. Medium-term interventions drive the strongest effects across all sub-dimensions; short-term interventions significantly impact only cognitive engagement, likely because the novelty of GenAI stimulates initial cognitive investment but does not sustain behavioral or affective engagement over time (). Long-term interventions maintain significant effects on the cognitive and behavioral dimensions but not on affective engagement, indicating that emotional engagement may require periodic renewal, such as new GenAI features or varied tasks to avoid stagnation. Collaborative learning amplifies GenAI’s effect on behavioral engagement compared with independent learning, as group settings create opportunities for observable behaviors (). In contrast, independent learning shows slightly higher heterogeneity in cognitive engagement, reflecting variability in how learners use GenAI on their own: some may use it for deep learning, while others use it only for task completion. Text-only interactions have no significant effect on affective engagement, as text-based feedback may lack the emotional nuance of multi-modal cues like audio praise. Multi-modal interactions, by contrast, benefit all sub-dimensions by catering to diverse learning preferences and creating more immersive experiences ().
Teacher intervention most strongly amplifies GenAI’s effect on cognitive engagement, as teachers can guide learners to use GenAI for higher-order thinking, for instance by comparing GenAI outputs with primary sources rather than passively consuming generated content (). For affective engagement, teacher intervention maintains a significant effect, though one slightly smaller than without intervention. This counterintuitive finding may reflect teachers’ focus on cognitive rather than emotional outcomes when integrating GenAI into their teaching.
5.2. Theoretical Implications
This study contributes to educational technology and learning engagement theory in three key ways. First, it enriches the ‘technology-enhanced engagement’ framework () by empirically verifying that GenAI exerts differential impacts across the cognitive, affective, and behavioral dimensions of learning engagement. Prior theoretical discussions often treated technology’s effect on engagement as a unified construct; this study’s findings clarify that GenAI’s strength lies in fostering cognitive engagement, addressing the theoretical gap concerning how specific technological features, such as real-time feedback and personalized content generation, map onto distinct engagement dimensions.
Second, the study extends the ‘human-AI synergy’ hypothesis () by highlighting the moderating role of teacher intervention. Theoretical debates previously focused on either AI’s independent value or teachers’ traditional roles, but this research demonstrates that teacher guidance amplifies GenAI’s cognitive benefits while introducing heterogeneity in outcomes. This finding refines the hypothesis by emphasizing that effective GenAI integration requires not just technological functionality but also aligned instructional scaffolding—providing a theoretical basis for understanding the interdependence of human and AI agents in educational contexts.
Third, the identification of contextual boundary conditions, such as educational stage and intervention duration, advances theory on technology’s contextual adaptability (). The ‘medium-term golden period’ of GenAI effectiveness and the variations across educational stages challenge the assumption that technology’s impact is consistent across settings. This adds nuance to existing theories by illustrating that tools like GenAI interact with contextual factors in non-linear ways, and that theoretical models must account for such contextual dynamics to explain precisely how technology shapes learning engagement.
5.3. Practical Implications
The findings offer actionable guidance for educators, educational technology developers, and policymakers to optimize GenAI’s role in enhancing learning engagement. For educators, the study highlights the need to tailor GenAI use to educational stages: in higher education, educators can leverage GenAI for complex cognitive tasks such as research support and critical thinking exercises given students’ stronger autonomy; in basic education, integrating multi-modal GenAI to boost affective engagement and pairing it with frequent teacher feedback to reinforce behavioral participation is more effective (); in continuing education, GenAI should be designed for task-specific, short-term skill-building to align with adult learners’ fragmented goals.
For technology developers, the results emphasize two key design directions: first, prioritizing features that support cognitive development like adaptive problem-solving prompts while incorporating multi-modal elements such as audio-visual feedback to address affective needs (); second, integrating engagement renewal mechanisms including dynamic task updates and personalized challenge adjustments to mitigate the decline in affective engagement during long-term use.
For policymakers, the study underscores the importance of targeted professional development for teachers. Since teacher intervention amplifies GenAI’s benefits, training programs should focus on building teachers’ capacity to integrate GenAI into instructional design, guiding students to use GenAI for higher-order thinking, and adapting their guidance to diverse learner needs (). Additionally, policymakers should support research on GenAI’s application in understudied contexts like continuing education to ensure equitable access to effective GenAI-enhanced learning experiences.
5.4. Limitations
This study has several limitations to note when interpreting its findings. First, the literature search was restricted to four databases and excluded non-English studies, which may introduce publication and language bias. Future research could expand to additional databases and include non-English literature to reduce this bias.
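Publication bias of the kind acknowledged here is commonly probed with funnel-plot asymmetry tests such as Egger’s regression, which regresses the standardized effect on precision; an intercept far from zero suggests small-study effects. The sketch below uses invented effect sizes and standard errors purely to illustrate the computation, not the review’s data.

```python
# Illustrative sketch of Egger's regression test for funnel-plot asymmetry.
# The effect sizes (Hedges' g) and standard errors are invented examples,
# NOT data from the studies synthesized in this review.

def egger_intercept(effects, ses):
    """Ordinary least squares of standardized effect (g / se) on precision
    (1 / se); the intercept indexes funnel-plot asymmetry."""
    y = [g / s for g, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx  # intercept

# Hypothetical data: smaller studies (larger se) report larger effects,
# the classic signature of possible publication bias.
g  = [0.30, 0.45, 0.52, 0.61, 0.70]
se = [0.10, 0.12, 0.15, 0.20, 0.25]
print(f"Egger intercept = {egger_intercept(g, se):.3f}")
```

In practice the intercept would be tested against its standard error; a clearly positive intercept, as in this contrived example, would prompt sensitivity analyses such as trim-and-fill.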
Second, the moderator analysis focused on five variables and did not include factors such as learners’ prior AI experience, GenAI tool types, and subject domains, which could further explain the heterogeneity of effects. Future studies may add these factors to moderator models.
6. Conclusions
This meta-analysis synthesizes 31 eligible empirical studies to investigate how GenAI influences learning engagement and the contextual factors shaping its effects, addressing the study’s three core research questions. Key findings show GenAI delivers a notable positive impact on overall learning engagement, though this varies across dimensions: it most strongly boosts cognitive engagement, has a moderate effect on affective engagement, and has a weaker yet significant influence on behavioral engagement. Moderator analyses reveal nuanced, sub-dimension-specific effects: the positive influence is most pronounced in higher education, shows significant benefits across all three sub-dimensions in basic education, and is non-significant in continuing education; medium-duration interventions yield the largest effects; and teacher intervention significantly amplifies gains in cognitive engagement. Both learning mode and interaction approach exert significant positive effects on overall learning engagement, while their impacts on the sub-dimensions did not show significant heterogeneity. These findings contribute to educational technology and learning engagement theory and offer actionable guidance for educators, educational technology developers, and policymakers for the advancement of personalized and engaging learning ecosystems.
Author Contributions
Conceptualization, K.W.; Methodology, K.W. and Z.G.; Software, K.W. and Z.G.; Data curation, K.W. and Z.G.; Writing—original draft, K.W. and Z.G.; Writing—review & editing, Z.G.; Visualization, K.W.; Supervision, K.W.; Project administration, K.W.; Funding acquisition, K.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Humanities and Social Sciences Fund of Ministry of Education of China (No. 24YJC880131) and the Natural Science Foundation of Hubei province (No. 2024AFB393).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Dataset available on request from the authors.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Alsaiari, O., Baghaei, N., Lahza, H., Lodge, J. M., Boden, M., & Khosravi, H. (2025). Emotionally enriched AI-generated feedback: Supporting student well-being without compromising learning. Computers & Education, 239, 105363. [Google Scholar] [CrossRef]
- Amofa, B., Kamudyariwa, X. B., Fernandes, F. A. P., Osobajo, O. A., Jeremiah, F., & Oke, A. (2025). Navigating the complexity of generative artificial intelligence in higher education: A systematic literature review. Education Sciences, 15(7), 826. [Google Scholar] [CrossRef]
- Bhatia, A. P., Lambat, A., & Jain, T. (2024). A comparative analysis of conventional and chat-generative pre-trained transformer-assisted teaching methods in undergraduate dental education. Cureus, 16(5), e60006. [Google Scholar] [CrossRef]
- Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. John Wiley & Sons. [Google Scholar]
- Chang, C., & Hwang, G. (2024). Promoting professional trainers’ teaching and feedback competences with ChatGPT: A question-exploration-evaluation training mode. Educational Technology & Society, 27(2), 405–421. [Google Scholar] [CrossRef]
- Chang, C., & Hwang, G. (2025). ChatGPT-facilitated professional development: Evidence from professional trainers’ learning achievements, self-worth, and self-confidence. Interactive Learning Environments, 33(1), 883–900. [Google Scholar] [CrossRef]
- Chaturvedi, A., Yadav, N., & Dasgupta, M. (2025). Tech-driven transformation: Unravelling the role of artificial intelligence in shaping strategic decision-making. International Journal of Human–Computer Interaction, 41(19), 12305–12324. [Google Scholar] [CrossRef]
- Chen, J., Mokmin, N. A. M., & Su, H. (2025). Integrating generative artificial intelligence into design and art course: Effects on student achievement, motivation, and self-efficacy. Innovations in Education and Teaching International, 62(5), 1431–1446. [Google Scholar] [CrossRef]
- Chen, Y., & Hou, H. (2024). A mobile contextualized educational game framework with ChatGPT interactive scaffolding for employee ethics training. Journal of Educational Computing Research, 62(7), 1517–1542. [Google Scholar] [CrossRef]
- Chen, Y., Wang, Y., Wüstenberg, T., Kizilcec, R. F., Fan, Y., Li, Y., Lu, B., Yuan, M., Zhang, J., Zhang, Z., Geldsetzer, P., Chen, S., & Bärnighausen, T. (2025). Effects of generative artificial intelligence on cognitive effort and task performance: Study protocol for a randomized controlled experiment among college students. Trials, 26(1), 244. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y., Zhang, X., & Hu, L. (2024). A progressive prompt-based image-generative AI approach to promoting students’ achievement and perceptions in learning ancient Chinese poetry. Educational Technology & Society, 27(2), 284–305. [Google Scholar] [CrossRef]
- Chiu, M., & Hwang, G. (2025). Enhancing student creative and critical thinking in generative AI-empowered creation: A mind-mapping approach. Interactive Learning Environments, 1–22. [Google Scholar] [CrossRef]
- Chu, H., Hsu, C., & Wang, C. (2025). Effects of AI-generated drawing on students’ learning achievement and creativity in an ancient poetry course. Educational Technology & Society, 28(2), 295–309. [Google Scholar] [CrossRef]
- Coates, H. (2007). A model of online and general campus-based student engagement. Assessment & Evaluation in Higher Education, 32(2), 121–141. [Google Scholar] [CrossRef]
- De Silva, D., Kaynak, O., El-Ayoubi, M., Mills, N., Alahakoon, D., & Manic, M. (2025). Opportunities and challenges of generative artificial intelligence: Research, education, industry engagement, and social impact. IEEE Industrial Electronics Magazine, 19(1), 30–45. [Google Scholar] [CrossRef]
- Essel, H. B., Vlachopoulos, D., Nunoo-Mensah, H., & Amankwa, J. O. (2025). Exploring the impact of VoiceBots on multimedia programming education among Ghanaian university students. British Journal of Educational Technology, 56(1), 276–295. [Google Scholar] [CrossRef]
- Farahmandpour, Z., & Voelkel, R. (2025). Teacher turnover factors and school-level influences: A meta-analysis of the literature. Education Sciences, 15(2), 219. [Google Scholar] [CrossRef]
- Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109. [Google Scholar] [CrossRef]
- Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5(10), 3–8. [Google Scholar] [CrossRef]
- Gong, X., Li, Z., & Qiao, A. (2025). Impact of generative AI dialogic feedback on different stages of programming problem solving. Education and Information Technologies, 30(7), 9689–9709. [Google Scholar] [CrossRef]
- Güner, H., & Er, E. (2025). AI in the classroom: Exploring students’ interaction with ChatGPT in programming learning. Education and Information Technologies, 30(9), 12681–12707. [Google Scholar] [CrossRef]
- Herianto, H., Sofroniou, A., Fitrah, M., Rosana, D., Setiawan, C., Rosnawati, R., Widihastuti, W., Jusmiana, A., & Marinding, Y. (2024). Quantifying the relationship between self-efficacy and mathematical creativity: A meta-analysis. Education Sciences, 14(11), 1251. [Google Scholar] [CrossRef]
- Hu, Y., Chen, J., & Hwang, G. (2025). A ChatGPT-supported QIVE model to enhance college students’ learning performance, problem-solving and self-efficacy in art appreciation. Interactive Learning Environments, 1–14. [Google Scholar] [CrossRef]
- Huang, S., Wen, C., Bai, X., Li, S., Wang, S., Wang, X., & Yang, D. (2025). Exploring the application capability of ChatGPT as an instructor in skills education for dental medical students: Randomized controlled trial. Journal of Medical Internet Research, 27, e68538. [Google Scholar] [CrossRef]
- Johnson, E. S., Crawford, A., Moylan, L. A., & Zheng, Y. (2018). Using evidence-Centered design to create a special educator observation system. Educational Measurement: Issues and Practice, 37(2), 35–44. [Google Scholar] [CrossRef]
- Kuh, G. D. (2003). What we’re learning about student engagement from NSSE: Benchmarks for effective educational practices. Change: The Magazine of Higher Learning, 35(2), 24–32. [Google Scholar] [CrossRef]
- Li, G., Li, Z., Wu, X., & Zhen, R. (2022). Relations between class competition and primary school students’ academic achievement: Learning anxiety and learning engagement as mediators. Frontiers in Psychology, 13, 775213. [Google Scholar] [CrossRef] [PubMed]
- Li, H. (2023). Effects of a ChatGPT-based flipped learning guiding approach on learners’ courseware project performances and perceptions. Australasian Journal of Educational Technology, 39(5), 40–58. [Google Scholar] [CrossRef]
- Linnenbrink-Garcia, L., Rogat, T. K., & Koskey, K. L. K. (2011). Affect and engagement during small group instruction. Contemporary Educational Psychology, 36(1), 13–24. [Google Scholar] [CrossRef]
- Liu, C., Wang, D., Gu, X., Hwang, G., Tu, Y., & Wang, Y. (2025). Facilitating pre-service teachers’ instructional design and higher-order thinking with generative AI: An integrated approach with the peer assessment and concept map. Journal of Research on Technology in Education, 1–26. [Google Scholar] [CrossRef]
- Liu, M., Zhang, L. J., & Biebricher, C. (2024). Investigating students’ cognitive processes in generative AI-assisted digital multimodal composing and traditional writing. Computers & Education, 211, 104977. [Google Scholar] [CrossRef]
- Liu, S., Zhang, S., & Dai, Y. (2025). Do mobile games improve language learning? A meta-analysis. Computer Assisted Language Learning, 1–29. [Google Scholar] [CrossRef]
- Liu, Z., Tang, Q., Ouyang, F., Long, T., & Liu, S. (2024). Profiling students’ learning engagement in MOOC discussions to identify learning achievement: An automated configurational approach. Computers & Education, 219, 105109. [Google Scholar] [CrossRef]
- Livinƫi, R., Gunnesch-Luca, G., & Iliescu, D. (2021). Research self-efficacy: A meta-analysis. Educational Psychologist, 56(3), 215–242. [Google Scholar] [CrossRef]
- Ng, D. T. K., Tan, C. W., & Leung, J. K. L. (2024). Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study. British Journal of Educational Technology, 55(4), 1328–1353. [Google Scholar] [CrossRef]
- Niloy, A. C., Akter, S., Sultana, N., Sultana, J., & Rahman, S. I. U. (2024). Is Chatgpt a menace for creative writing ability? An experiment. Journal of Computer Assisted Learning, 40(2), 919–930. [Google Scholar] [CrossRef]
- Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. PLoS Medicine, 18(3), e1003583. [Google Scholar] [CrossRef]
- Pan, X., & Lin, L. (2025). Online presence and technology-enhanced language learning experience: A multivariate relationship of learning aspiration, perceived authenticity and learning engagement. Educational Technology Research and Development, 73(2), 1179–1203. [Google Scholar] [CrossRef]
- Prestridge, S., Fry, K., & Kim, E.-J. A. (2025). Teachers’ pedagogical beliefs for Gen AI use in secondary school. Technology, Pedagogy and Education, 34(2), 183–199. [Google Scholar] [CrossRef]
- Runge, I., Lazarides, R., Rubach, C., Richter, D., & Scheiter, K. (2023). Teacher-reported instructional quality in the context of technology-enhanced teaching: The role of teachers’ digital competence-related beliefs in empowering learners. Computers & Education, 198, 104761. [Google Scholar] [CrossRef]
- Shi, S. J., Li, J. W., & Zhang, R. (2024). A study on the impact of generative artificial intelligence supported situational interactive teaching on students’ ‘flow’ experience and learning effectiveness—A case study of legal education in China. Asia Pacific Journal of Education, 44(1), 112–138. [Google Scholar] [CrossRef]
- Sibley, L., Lachner, A., Plicht, C., Fabian, A., Backfisch, I., Scheiter, K., & Bohl, T. (2024). Feasibility of adaptive teaching with technology: Which implementation conditions matter? Computers & Education, 219, 105108. [Google Scholar] [CrossRef]
- Soleimani, S., Farrokhnia, M., Van Dijk, A., & Noroozi, O. (2025). Educators’ perceptions of generative AI: Investigating attitudes, barriers and learning needs in higher education. Innovations in Education and Teaching International, 62(5), 1598–1613. [Google Scholar] [CrossRef]
- Song, Y., Huang, L., Zheng, L., Fan, M., & Liu, Z. (2025). Interactions with generative AI chatbots: Unveiling dialogic dynamics, students’ perceptions, and practical competencies in creative problem-solving. International Journal of Educational Technology in Higher Education, 22(1), 12. [Google Scholar] [CrossRef]
- Soriano-Sánchez, J. G. (2025). The impact of ICT on primary school students’ natural science learning in support of diversity: A meta-analysis. Education Sciences, 15(6), 690. [Google Scholar] [CrossRef]
- Svendsen, K., Askar, M., Umer, D., & Halvorsen, K. H. (2024). Short-term learning effect of ChatGPT on pharmacy students’ learning. Exploratory Research in Clinical and Social Pharmacy, 15, 100478. [Google Scholar] [CrossRef]
- Tan, C. Y., & Gao, L. (2025). Evaluating methodological quality of meta-analyses: A case study of meta-analyses on associations between parental involvement and students’ learning outcomes. Educational Research Review, 47, 100678. [Google Scholar] [CrossRef]
- Tang, Q., Deng, W., Huang, Y., Wang, S., & Zhang, H. (2025). Can generative artificial intelligence be a good teaching assistant?—An empirical analysis based on generative AI-assisted teaching. Journal of Computer Assisted Learning, 41(3), e70027. [Google Scholar] [CrossRef]
- Tian, S., Wang, D., Wang, J., & Zhong, W. (2025). Empowering GenAI with a guidance-based approach in MTPE learning: Effect on student translators’ cognitive process, final translation quality and learning motivation. The Interpreter and Translator Trainer, 1–26. [Google Scholar] [CrossRef]
- Vasilaki, M.-M., Zafeiroudi, A., Tsartsapakis, I., Grivas, G. V., Chatzipanteli, A., Aphamis, G., Giannaki, C., & Kouthouris, C. (2025). Learning in nature: A systematic review and meta-analysis of outdoor recreation’s role in youth development. Education Sciences, 15(3), 332. [Google Scholar] [CrossRef]
- Woodruff, E. (2024). AI detection of human understanding in a Gen-AI tutor. AI, 5(2), 898–921. [Google Scholar] [CrossRef]
- Wu, J., Jiang, H., & Chen, S. (2024). Augmented reality technology in language learning: A meta-analysis. Language Learning & Technology, 28(1), 1–23. [Google Scholar] [CrossRef]
- Wu, J., Wang, K., He, C., Huang, X., & Dong, K. (2021). Characterizing the patterns of China’s policies against COVID-19: A bibliometric study. Information Processing & Management, 58, 102562. [Google Scholar] [CrossRef]
- Xu, T., Liu, Y., Jin, Y., Qu, Y., Bai, J., Zhang, W., & Zhou, Y. (2025). From recorded to AI-generated instructional videos: A comparison of learning performance and experience. British Journal of Educational Technology, 56(4), 1463–1487. [Google Scholar] [CrossRef]
- Yang, G., Rong, Y., Wang, Y., Zhang, Y., Yan, J., & Tu, Y. (2025). How generative artificial intelligence supported reflective strategies promote middle school students’ conceptual knowledge learning: An empirical study from China. Interactive Learning Environments, 1–26. [Google Scholar] [CrossRef]
- Yang, M., Wu, X., & Deris, F. D. (2025). Exploring EFL learners’ positive emotions, technostress and psychological well-being in AI-assisted language instruction with/without teacher support in Malaysia. British Educational Research Journal. [Google Scholar] [CrossRef]
- Yang, T., Hsu, Y., & Wu, J. (2025). The effectiveness of ChatGPT in assisting high school students in programming learning: Evidence from a quasi-experimental research. Interactive Learning Environments, 33, 1–18. [Google Scholar] [CrossRef]
- Ye, X., Zhang, W., Zhou, Y., Li, X., & Zhou, Q. (2025). Improving students’ programming performance: An integrated mind mapping and generative AI chatbot learning approach. Humanities and Social Sciences Communications, 12(1), 558. [Google Scholar] [CrossRef]
- Zeng, H., & Xin, Y. (2025). Comparing learning persistence and engagement in asynchronous and synchronous online learning, the role of autonomous academic motivation and time management. Interactive Learning Environments, 33(1), 276–295. [Google Scholar] [CrossRef]
- Zhao, G., Yang, L., Hu, B., & Wang, J. (2025). A generative artificial intelligence (AI)-based human-computer collaborative programming learning method to improve computational thinking, learning attitudes, and learning achievement. Journal of Educational Computing Research, 63(5), 1059–1087. [Google Scholar] [CrossRef]
- Zhong, H., & Xu, J. (2025). The effect of fragmented learning ability on college students’ online learning satisfaction: Exploring the mediating role of affective, behavioral, and cognitive engagement. Interactive Learning Environments, 1–15. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).