Article

ChatGPT-Assisted Learning Effectiveness and Academic Achievement: A Mechanism-Based Model in Higher Education

by Ahmed Mohamed Hasanein 1,* and Bassam Samir Al-Romeedy 2

1 Management Department, College of Business Administration, King Faisal University, Al-Ahsaa P.O. Box 380, Saudi Arabia
2 Tourism Studies Department, Faculty of Tourism and Hotels, University of Sadat City, Sadat City 32897, Egypt
* Author to whom correspondence should be addressed.
Information 2026, 17(3), 303; https://doi.org/10.3390/info17030303
Submission received: 25 January 2026 / Revised: 10 March 2026 / Accepted: 20 March 2026 / Published: 21 March 2026
(This article belongs to the Special Issue Trends in Artificial Intelligence-Supported E-Learning)

Abstract

This study examines the impact of ChatGPT-assisted learning on the academic achievement of hospitality and tourism students in Egyptian public universities, with particular emphasis on the mediating roles of perceived usefulness and self-regulated learning. Drawing conceptually on the Technology Acceptance Model (TAM), the study adopts a contextualized framework that emphasizes perceived usefulness while incorporating ChatGPT-assisted learning effectiveness as a learning-oriented driver within generative AI-supported educational environments. A quantitative research design was employed using an online survey administered to students who actively used ChatGPT for academic purposes. A total of 689 valid responses were collected from nine public universities and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to test the proposed hypotheses. The findings indicate that ChatGPT-Assisted Learning Effectiveness (CALE) has a statistically significant and positive direct effect on academic achievement (AA; β = 0.386, T = 3.946, p < 0.001, 95% CI = 0.192–0.561) and strongly predicts perceived usefulness (PU; β = 0.673, T = 9.274, p < 0.001, 95% CI = 0.581–0.742) and self-regulated learning (SRL; β = 0.707, T = 10.734, p < 0.001, 95% CI = 0.621–0.779). In turn, PU (β = 0.281, T = 3.854, p < 0.001, 95% CI = 0.142–0.417) and SRL (β = 0.220, T = 2.418, p = 0.016, 95% CI = 0.041–0.356) significantly enhance academic achievement. Mediation analyses further confirm that PU (β = 0.189, T = 2.366, p = 0.018, 95% CI = 0.031–0.284) and SRL (β = 0.156, T = 3.699, p < 0.001, 95% CI = 0.102–0.301) partially mediate the relationship between CALE and academic achievement. These findings offer important theoretical insights by contextualizing TAM’s performance-related logic within generative AI-driven learning environments and refining its application to academic outcome settings, while highlighting self-regulated learning as a critical explanatory mechanism. From a practical perspective, the study provides valuable implications for educators and policymakers by emphasizing the need to promote students’ perceived usefulness of ChatGPT and foster learner autonomy, positioning generative AI as a powerful pedagogical support tool for enhancing academic success in hospitality and tourism education.

1. Introduction

The rapid expansion of artificial intelligence-driven applications has begun to fundamentally alter the nature of learning in higher education, shifting educational technologies from passive delivery tools to active participants in the learning process [1]. Among these applications, ChatGPT-5.3 has emerged as one of the most influential innovations due to its ability to generate coherent explanations, respond dynamically to students’ queries, and simulate dialogic academic interaction [2]. Unlike earlier digital learning tools that primarily supported information access or content management, ChatGPT offers a form of interactive cognitive assistance that aligns closely with how students process information, clarify understanding, and construct knowledge independently [3]. This transformation has positioned ChatGPT-assisted learning as a phenomenon of growing importance rather than a transient technological trend [4].
The importance of ChatGPT-assisted learning in education lies in its capacity to support learning in ways that extend beyond traditional instructional boundaries [3]. By providing immediate feedback, personalized explanations, and continuous academic support, ChatGPT enables students to engage with course material at their own pace and according to their individual learning needs [5]. This is particularly relevant in higher education contexts, where students are expected to assume increasing responsibility for their learning outcomes [6]. In fields such as tourism and hospitality education, where theoretical understanding must be integrated with applied reasoning and contextual awareness, students often require iterative clarification and reflective engagement with content [7]. ChatGPT creates a learning environment in which such engagement becomes more accessible, potentially enhancing students’ academic experiences and learning effectiveness [8].
Nevertheless, the educational value of ChatGPT cannot be inferred solely from its technological capabilities [9]. Its contribution to learning depends fundamentally on how effective it is perceived and utilized by students within their academic routines [5]. Effectiveness in ChatGPT-assisted learning encompasses more than functional accuracy; it reflects the extent to which students believe that the tool meaningfully enhances their learning efficiency, understanding, and academic performance [10]. Central to this evaluation is perceived usefulness, which represents students’ cognitive judgment regarding whether ChatGPT adds tangible value to their learning activities [11]. When students perceive ChatGPT as useful, they are more likely to integrate it purposefully into their study practices rather than using it sporadically or superficially [12,13].
At the same time, the effectiveness of ChatGPT-assisted learning is closely intertwined with students’ capacity for self-regulated learning [10]. Self-regulated learning refers to the processes through which learners plan, monitor, and evaluate their learning behaviors in pursuit of academic goals [14]. ChatGPT has the potential to support these processes by enabling students to set learning objectives, seek clarification when comprehension gaps arise, and reflect on their understanding through iterative questioning [3]. In this sense, ChatGPT does not replace learners’ cognitive effort but may instead act as a scaffold that strengthens their ability to regulate learning autonomously [15]. Consequently, effectiveness emerges as a multidimensional construct that operates through both perceived usefulness and self-regulated learning mechanisms [4].
Understanding how these mechanisms translate into academic achievement is critical, as academic performance remains the primary indicator of learning success in higher education [16]. Academic achievement is rarely the result of direct exposure to technology; rather, it reflects how learners cognitively evaluate and behaviorally employ available learning resources [17,18,19]. When students perceive ChatGPT as useful and engage with it in a self-regulated manner, they are more likely to adopt effective learning strategies, sustain academic effort, and achieve higher performance outcomes [12]. Conversely, without positive perceptions of usefulness or structured self-regulatory behaviors, the presence of advanced AI tools may have little impact on academic achievement [4]. This perspective highlights the necessity of examining indirect pathways rather than assuming a straightforward relationship between ChatGPT use and academic performance.
From a theoretical standpoint, this study draws on the Technology Acceptance Model (TAM) as a conceptual lens for understanding how students’ performance-related beliefs shape their engagement with ChatGPT-assisted learning. Within this framework, perceived usefulness functions as a central determinant of technology-related behavior, influencing whether students consider ChatGPT relevant to their academic tasks [20]. Although the original TAM emphasizes both perceived usefulness and perceived ease of use as key determinants of technology adoption, the present study focuses specifically on perceived usefulness because it represents the performance-oriented belief most directly associated with learning outcomes. However, explaining learning outcomes requires extending this perspective beyond acceptance decisions alone toward understanding how acceptance-related beliefs are enacted within learning activities. Importantly, the present study does not aim to replicate traditional TAM adoption pathways; rather, it reorients TAM’s core performance-related logic toward explaining academic outcomes within generative AI-supported learning environments. In this context, ChatGPT-assisted learning effectiveness is introduced as a context-specific external driver that reflects the pedagogical value of generative AI in supporting learning processes. In this regard, self-regulated learning is not treated as an independent theoretical framework, but rather as a behavioral expression of technology acceptance that reflects how perceived usefulness translates into structured learning engagement [16]. While perceived usefulness explains why students are motivated to rely on ChatGPT as a learning tool, self-regulated learning explains how this reliance is operationalized through planning, monitoring, and strategic regulation of learning activities within academic routines [21,22]. Such positioning shifts the analytical emphasis from technology adoption to mechanism-based academic performance explanation. From a TAM-based perspective, these self-directed learning behaviors represent the mechanism through which acceptance-related beliefs acquire academic relevance. Academic achievement therefore emerges as the performance outcome of how students’ cognitive evaluations of ChatGPT are translated into sustained, goal-oriented learning behavior [23].
Despite the growing scholarly interest in artificial intelligence in education, existing research remains fragmented in explaining how AI-supported learning translates into measurable academic achievement. Prior studies have predominantly emphasized students’ acceptance and attitudes toward AI-based technologies [1,24], while parallel research streams have examined self-regulated learning as a predictor of academic success within digital environments [15]. However, these lines of inquiry have largely evolved independently, resulting in limited theoretical integration between technology acceptance perspectives and learning regulation mechanisms. Moreover, generative AI systems such as ChatGPT differ substantially from earlier educational technologies due to their adaptive, dialogic, and cognitively interactive features, raising the question of whether traditional adoption-focused explanations are sufficient to account for their educational impact. Although perceived usefulness has been widely investigated as a determinant of technology engagement, considerably less attention has been devoted to understanding how it operates in conjunction with self-regulated learning to influence academic performance outcomes. This lack of integrative, mechanism-based explanation is particularly evident in tourism and hospitality education, where AI-related research remains predominantly descriptive rather than theory-driven and outcome-oriented. Accordingly, existing scholarship provides limited insight into the cognitive and behavioral mediation processes through which ChatGPT-assisted learning effectiveness becomes academically consequential. Consequently, there remains a clear need for a theoretically coherent model that explains how ChatGPT-assisted learning effectiveness contributes to academic achievement through interconnected cognitive and behavioral pathways.
In response to these gaps, the present study seeks to examine how the effectiveness of ChatGPT-assisted learning influences students’ academic achievement through perceived usefulness and self-regulated learning. By focusing on students in higher education, the study aims to provide context-specific insights into how AI-supported learning environments shape academic outcomes. Empirically, the study draws on data collected from students enrolled in hospitality and tourism programs at public universities, providing a context-specific examination of AI-assisted learning within a particular higher education setting. In doing so, it advances a theoretically integrated model that moves beyond direct-effect assumptions and emphasizes the cognitive and behavioral mechanisms underlying effective AI-assisted learning. Unlike prior studies that have examined AI adoption, perceived usefulness, or self-regulated learning in isolation, the present research offers an integrative, mechanism-based explanation of how ChatGPT-assisted learning effectiveness translates into academic achievement. Specifically, it (1) conceptualizes effectiveness as a multidimensional external determinant within the TAM framework, (2) positions self-regulated learning as a behavioral enactment of acceptance-related beliefs rather than as an independent theoretical stream, and (3) empirically examines the mediating roles of perceived usefulness and self-regulated learning within the underexplored context of tourism and hospitality education. Thus, the study contributes not by reiterating expected TAM relationships but by articulating how generative AI effectiveness becomes academically consequential through structured cognitive and regulatory processes. The study contributes to the literature by refining the application of TAM’s performance-related logic within AI-assisted learning contexts, empirically clarifying the mediating roles of perceived usefulness and self-regulated learning, and enriching current understanding of ChatGPT’s role in discipline-specific higher education contexts.

2. Literature Review and Hypothesis Development

Artificial intelligence can enhance learning outcomes not merely through automation or information delivery, but by reshaping how learners interact with academic content and regulate their study behaviors [25]. Prior research suggests that AI-assisted systems improve learning outcomes when they provide immediate feedback, adaptive explanations, and opportunities for iterative engagement with course material [26]. Such affordances reduce unnecessary cognitive load and support conceptual clarification, thereby strengthening perceived usefulness and meaningful engagement. Moreover, AI tools such as ChatGPT can foster self-regulated learning by enabling learners to plan study activities, monitor understanding, and adjust learning strategies through continuous interaction. Educational research consistently demonstrates that these regulatory behaviors are directly associated with improved academic achievement [1]. Accordingly, AI enhances learning outcomes when its effectiveness translates into performance-oriented beliefs and sustained self-regulatory engagement rather than passive system use [27]. However, despite growing empirical evidence supporting these mechanisms, existing studies remain theoretically fragmented and rarely integrate these processes within a unified explanatory framework.
Although research on artificial intelligence in education has expanded rapidly, prior studies remain distributed across multiple conceptual streams that are rarely integrated within a unified explanatory framework. Existing work has focused separately on (1) the effectiveness of generative AI tools such as ChatGPT; (2) technology acceptance constructs, particularly perceived usefulness; and (3) self-regulated learning as a predictor of academic achievement. To provide a clearer synthesis of these streams and clarify the theoretical positioning of the present study, Table 1 summarizes key empirical contributions related to ChatGPT-assisted learning effectiveness, perceived usefulness, self-regulated learning, and academic achievement.
As shown in Table 1, prior research reflects three dominant yet largely disconnected strands. First, AI-related studies emphasize adoption and perceptions without systematically explaining how effectiveness translates into academic outcomes. Second, research on perceived usefulness identifies its importance for technology engagement but rarely examines its performance-related role within AI-supported learning contexts. Third, self-regulated learning has been extensively linked to academic achievement; however, its integration within generative AI environments such as ChatGPT remains limited. More critically, empirical studies that combine these constructs within a unified TAM-informed explanatory framework remain scarce. While the Technology Acceptance Model traditionally incorporates both perceived usefulness and perceived ease of use as key determinants of technology adoption, the present study focuses specifically on perceived usefulness because it represents the performance-related belief most directly associated with learning outcomes in AI-supported educational environments. This fragmentation is particularly evident in tourism and hospitality education, where AI-related research remains predominantly exploratory rather than theory-driven and outcome-oriented. The present study addresses this gap by integrating ChatGPT-assisted learning effectiveness, perceived usefulness, and self-regulated learning within a coherent model explaining academic achievement.
Importantly, the proposed approach differs from existing approaches in three fundamental respects. Firstly, rather than examining AI adoption, perceived usefulness, or self-regulated learning as isolated predictors, the present study integrates these constructs within a single explanatory framework informed by the Technology Acceptance Model (TAM). Rather than replicating the full TAM adoption structure, the study draws on TAM’s performance-related logic by emphasizing perceived usefulness as a central cognitive belief while introducing ChatGPT-assisted learning effectiveness as a context-specific external determinant reflecting the pedagogical value of generative AI-supported learning environments. This shift moves the analytical focus from isolated variable testing to a structured explanation of how AI-assisted learning becomes academically consequential. Secondly, while prior research has typically treated self-regulated learning as an independent theoretical stream, this study reconceptualizes it as a behavioral manifestation of acceptance-related beliefs, thereby embedding learning regulation within the broader TAM-informed framework. This theoretical positioning clarifies the process through which performance-related beliefs are translated into observable academic outcomes, rather than assuming direct or loosely connected effects. Thirdly, unlike predominantly descriptive or perception-focused studies in tourism and hospitality education, the present research advances a mechanism-based, outcome-oriented model that empirically explains how ChatGPT-assisted learning effectiveness contributes to academic achievement through mediation mechanisms. In doing so, the study departs from adoption-centered or usage-frequency models and instead articulates a causal pathway linking experiential evaluations, cognitive beliefs, regulatory behaviors, and performance outcomes within a unified explanatory structure.

2.1. Technology Acceptance Model (TAM)

The theoretical grounding of this study draws on the Technology Acceptance Model (TAM), which offers a well-established explanation of how individuals evaluate and engage with technological systems [40]. The original TAM posits that technology adoption is primarily influenced by two key cognitive beliefs: perceived usefulness and perceived ease of use. Central to this model is the assumption that users’ interactions with technology are shaped less by objective system features and more by their perceptions of its utility in enhancing task performance [41]. Within educational settings, this assumption is particularly salient, as learning technologies become pedagogically meaningful only when students believe that they contribute directly to academic effectiveness [16]. In the context of ChatGPT-assisted learning, TAM provides a conceptual lens for understanding how students cognitively appraise the system’s role in supporting their learning activities [42]. While TAM traditionally incorporates both perceived usefulness and perceived ease of use, the present study focuses specifically on perceived usefulness because it represents the performance-related belief most directly associated with learning outcomes in AI-supported educational environments. Importantly, the present study does not seek to merely replicate traditional TAM relationships, but to reposition its core logic within the context of generative AI systems, whose interactive and adaptive characteristics differ substantially from conventional information systems.
In line with TAM’s logic, ChatGPT-assisted learning effectiveness is conceptualized as an external experiential determinant that shapes students’ perceptions of usefulness [43]. Rather than referring solely to technical sophistication, effectiveness reflects students’ subjective evaluations of how well ChatGPT supports comprehension, learning efficiency, and academic task completion [44]. Prior research suggests that experiential judgments regarding system performance play a decisive role in forming perceived usefulness, which remains a central determinant of technology-related engagement [45]. By conceptualizing effectiveness as a performance-relevant experiential construct within AI-assisted learning, the study extends TAM beyond adoption-focused contexts toward outcome-oriented educational environments.
Perceived usefulness represents the cognitive belief through which students interpret the academic value of ChatGPT. However, while usefulness explains why students may rely on a learning tool, academic outcomes depend on how such beliefs are translated into structured learning behaviors. Within this perspective, self-regulated learning is conceptualized as a task-oriented behavioral mechanism that may operate alongside perceived usefulness in explaining academic performance [3,17,18]. Students who perceive ChatGPT as useful are more likely to integrate it into planning, monitoring, and strategic learning practices. This positioning shifts the analytical focus from technology acceptance per se to the mechanisms through which acceptance-related beliefs become academically consequential.
Academic achievement is conceptualized as the performance outcome associated with both cognitive evaluations and learning regulation processes. Educational research consistently demonstrates that learners who effectively regulate their learning tend to achieve stronger academic results due to sustained effort and adaptive strategy use [16]. Accordingly, ChatGPT-assisted learning effectiveness may influence academic achievement both directly—through enhanced task support—and indirectly through perceived usefulness and self-regulated learning mechanisms. Such an approach moves beyond traditional TAM applications centered on intention or usage, and instead explicates how AI-supported learning environments translate into measurable academic performance.
In this vein, rather than formally extending TAM by incorporating all of its traditional components, the present study builds on TAM’s core logic regarding performance-related beliefs and applies it within an AI-assisted learning context. By integrating perceived usefulness and self-regulated learning as complementary explanatory mechanisms, the proposed framework offers a theoretically grounded account of how ChatGPT-assisted learning effectiveness contributes to academic achievement in higher education contexts. Therefore, the contribution of the study lies not in reiterating TAM’s foundational propositions but in advancing a mechanism-based model that connects AI-assisted effectiveness, cognitive evaluation, learning regulation, and academic outcomes within a unified explanatory structure.

2.2. ChatGPT-Assisted Learning Effectiveness and Academic Achievement

The effectiveness of ChatGPT-assisted learning is expected to influence students’ academic achievement insofar as it enhances the functional quality of the learning process itself [46]. In higher education, academic achievement is closely tied to students’ ability to comprehend complex material, apply concepts to academic tasks, and manage learning demands under cognitive and temporal constraints [16]. When ChatGPT operates as an effective learning aid—by offering precise explanations, enabling iterative clarification, and supporting task-oriented learning—it strengthens the alignment between students’ learning efforts and academic requirements, thereby increasing the likelihood of improved academic outcomes rather than merely enhancing perceived convenience or satisfaction [45,46]. This relationship is theoretically consistent with the performance-oriented logic of the Technology Acceptance Model (TAM), which suggests that users’ evaluations of a system’s instrumental value shape performance-related outcomes. Within this perspective, effectiveness functions as an external experiential condition that influences whether a technology is perceived as capable of enhancing task performance [47]. In academic settings, task performance is inherently linked to achievement indicators such as assessment outcomes and overall academic standing [48]. Accordingly, when ChatGPT-assisted learning is experienced as effective in supporting academic tasks, its influence is expected to extend beyond adoption decisions and manifest in achievement-related outcomes in line with TAM-informed performance reasoning rather than traditional technology adoption pathways [49,50]. Recent empirical evidence reinforces this reasoning by suggesting that the academic value of generative AI tools such as ChatGPT is contingent on how effectively they are embedded within learning processes rather than on their technological novelty. Some studies indicate that when ChatGPT is effectively used to support understanding and facilitate task completion, students are more likely to experience measurable academic benefits, as effectiveness enhances meaningful engagement with learning activities [17,19,28]. Other research highlights that the perceived instructional value of ChatGPT extends beyond usage frequency, as its effectiveness contributes to the formation of performance-oriented beliefs that shape learning outcomes [29]. In addition, contextual evidence suggests that the educational impact of ChatGPT materializes when students experience it as a reliable academic support tool integrated into their learning routines [30]. Hence, the following hypothesis is suggested:
H1. 
ChatGPT-assisted learning effectiveness has a positive effect on academic achievement.

2.3. ChatGPT-Assisted Learning Effectiveness and Perceived Usefulness

Perceived usefulness in the context of ChatGPT-assisted learning represents students’ evaluative judgment regarding whether the system genuinely contributes to their academic functioning [31]. Rather than emerging as an immediate reaction to technology exposure, usefulness develops as students repeatedly assess how effectively ChatGPT supports core learning activities such as understanding complex material, structuring academic tasks, and reducing unnecessary cognitive effort [46]. This experiential basis of usefulness formation has been emphasized in educational research, which suggests that students’ beliefs about usefulness are grounded in observable learning benefits rather than in initial expectations [4]. Consistent with the logic of the Technology Acceptance Model (TAM), perceived usefulness reflects a performance-related belief shaped through users’ experiential evaluation of a system’s instrumental contribution to task accomplishment. Within academic environments, this contribution is evaluated through the extent to which a learning tool enhances efficiency, clarity, and progress toward learning goals [51,52]. When ChatGPT-assisted learning demonstrates effectiveness in supporting these dimensions, usefulness becomes cognitively anchored as a performance-relevant belief rather than a superficial attitude [3,4]. Dzimar et al. [34] show that students are more likely to perceive AI-supported learning tools as useful when these tools function effectively in addressing concrete learning needs. Zhuo et al. [33] indicate that improvements in learning efficiency and reductions in cognitive effort strengthen usefulness perceptions among learners. Likewise, Cai et al. [31] emphasize that sustained interaction with effective AI-based support is necessary for usefulness judgments to stabilize over time. Moghavvemi and Jam [32] argue that system effectiveness operates as a critical antecedent shaping beliefs about a technology’s performance value in educational settings. Similarly, Barakat et al. [53] demonstrate that students’ perceptions of ChatGPT’s usefulness are closely tied to its effectiveness in supporting academic task completion and learning goal attainment. Accordingly, the following hypothesis is developed:
H2. 
ChatGPT-assisted learning effectiveness has a positive effect on perceived usefulness.

2.4. ChatGPT-Assisted Learning Effectiveness and Self-Regulated Learning

Self-regulated learning in higher education is shaped not only by students’ personal capabilities, but also by the extent to which learning environments provide conditions that support autonomy, control, and reflective engagement [35]. Learning tools that allow students to regulate the pace of study, revisit explanations, and seek clarification when challenges arise are more likely to encourage planning, monitoring, and strategic adjustment of learning activities [54]. In this context, AI-based learning tools such as ChatGPT may play a formative role in influencing how students organize and manage their learning processes [55]. When ChatGPT-assisted learning is experienced as effective, it enables students to engage with academic content in an active and self-directed manner rather than through passive information consumption [56]. By facilitating iterative questioning, immediate clarification, and ongoing interaction with learning material, ChatGPT creates opportunities for learners to evaluate their understanding and adapt their learning strategies accordingly [3]. The effectiveness of ChatGPT in supporting these interactions therefore functions as a contextual condition that fosters the gradual development of self-regulatory behaviors [45]. This interpretation is consistent with the broader logic of technology acceptance research, including TAM-informed perspectives, which suggest that technologies perceived as effective and valuable are more likely to become integrated into users’ routines and practices. In educational settings, such embedding is reflected not in frequency of use alone, but in how a tool becomes integrated into students’ approaches to managing learning tasks [57]. When ChatGPT is perceived as effective in supporting learning activities, students are more inclined to incorporate it into their self-directed learning routines, thereby strengthening self-regulated learning behaviors [4]. Lee et al. [3] show that students are more likely to engage in planning and monitoring when ChatGPT functions as an effective academic support tool. Wang et al. [12,13] demonstrate that effective use of ChatGPT encourages independent problem-solving and greater control over learning processes. Likewise, Zhu [36] highlights that sustained engagement with ChatGPT supports learner autonomy by facilitating reflection and strategic adjustment during academic work. Al-Abri [10] further indicates that ChatGPT-assisted learning effectiveness enhances self-regulation by enabling learners to control learning pace and evaluate progress. In addition, Zhou and Li [37] argue that when ChatGPT operates effectively as a learning companion, repeated interaction promotes habitual self-regulatory behaviors through continuous feedback and learner-driven exploration. Therefore, the following hypothesis is proposed:
H3. 
ChatGPT-assisted learning effectiveness has a positive effect on self-regulated learning.

2.5. Perceived Usefulness and Academic Achievement

Academic achievement can be understood as the outcome of how students interpret and respond to the academic value of the learning resources available to them. Students are continuously exposed to multiple learning tools and informational inputs, yet only those perceived as genuinely useful tend to be integrated into sustained learning practices [16]. Perceived usefulness therefore acts as a cognitive lens through which students evaluate whether a learning tool merits continued reliance within their academic routines [58]. Within digitally supported learning environments, usefulness perceptions influence academic achievement by shaping how students make sense of learning experiences rather than by directly dictating behavior [59]. Within the framework of the TAM, perceived usefulness represents a performance-related belief concerning the extent to which a technological system contributes to improving task accomplishment and learning outcomes. When students believe that a learning tool meaningfully contributes to understanding course material or accomplishing academic tasks, they are more likely to internalize it as part of their academic problem-solving repertoire [16]. From this perspective, perceived usefulness operates as an interpretive belief that connects learning support to achievement-related outcomes, consistent with technology acceptance views that associate usefulness with task accomplishment and academic progress [4]. Hussain and Anwar [60] demonstrate that students achieve higher academic outcomes when they perceive learning tools as genuinely useful for facilitating understanding and task completion. Moreover, Onal et al. [61] show that perceived usefulness shapes students’ commitment to learning activities that align with academic objectives, thereby influencing achievement levels. Shahraniza and Abubaker [62] further highlight that usefulness perceptions are associated with deeper engagement with academic content, which translates into improved academic achievement. Al-Abdi et al. [63] emphasize that perceived usefulness guides learners’ strategic academic choices, while Klarin et al. [64] argue that usefulness functions as a cognitive filter that enables students to distinguish between learning tools that contribute to achievement and those that do not. Accordingly, the following hypothesis is developed:
H4. 
Perceived usefulness has a positive effect on academic achievement.

2.6. Self-Regulated Learning and Academic Achievement

Academic achievement within technology-supported learning environments can be understood as the outcome of how students translate technology-related beliefs into sustained learning behavior. Performance outcomes depend not merely on system use but on how technology-related beliefs influence learners’ task-oriented behaviors according to TAM-informed perspectives on technology use in educational contexts. In academic contexts characterized by cumulative content and continuous assessment demands, achievement is therefore more closely associated with how learners regulate their engagement with learning tasks over time than with short-term exposure to instructional inputs [54]. Self-regulated learning represents this behavioral mechanism through which acceptance-related beliefs become academically consequential. When students perceive learning technologies as supportive of their academic goals, they are more likely to engage in behaviors that reflect planning, monitoring, and strategic adjustment of learning activities [65,66]. These behaviors enable learners to structure their study efforts, allocate cognitive and motivational resources effectively, and respond adaptively to academic challenges. Within this framework, self-regulated learning can be interpreted as the behavioral process through which perceived usefulness and learning support are translated into academic outcomes. Accordingly, academic achievement can be conceptualized as the cumulative result of self-regulated learning behaviors enacted in response to academic demands [67]. Students who consistently regulate their learning are better positioned to convert instructional resources and learning technologies into measurable academic outcomes [68]. Variations in academic achievement can therefore be expected to reflect differences in the extent to which students engage in self-regulated learning as part of their task-oriented interaction with learning environments [69]. Ergen and Kanadlı [38] also show that self-regulated learning enhances achievement by aligning study strategies with assessment demands. Wolters and Hussain [70] highlight the role of self-regulation in maintaining persistence and effective effort management. Lim et al. [39] provide evidence that integrating cognitive and motivational resources through self-regulation facilitates academic achievement over time. Thus, the following hypothesis is proposed:
H5. 
Self-regulated learning has a positive effect on academic achievement.

2.7. The Mediating Role of Perceived Usefulness

Academic achievement in AI-supported learning contexts cannot be attributed solely to whether a system performs well, but rather to how learners cognitively translate that performance into academic relevance. Prior studies indicate that effective learning support does not automatically yield academic gains unless students interpret such effectiveness as directly aligned with their academic objectives [10,49]. Accordingly, the academic impact of ChatGPT-assisted learning depends on learners’ ability to cognitively connect experienced support with progress toward academic goals, rather than on system effectiveness in isolation [4,71]. This interpretive process is consistent with TAM-informed perspectives, which emphasize that technology-related outcomes are shaped by users’ evaluative beliefs rather than by direct system effects alone. Within this perspective, technologies influence outcomes insofar as users believe they meaningfully contribute to task accomplishment [47]. This assumption becomes particularly salient in higher education, where academic tasks simultaneously function as learning activities and indicators of achievement, rendering task execution inseparable from academic success [48]. In ChatGPT-assisted learning environments, effectiveness acquires academic significance only when students repeatedly experience the system as relevant to managing learning demands and accomplishing academic tasks, allowing these experiences to be cognitively organized as judgments of usefulness [72]. Once perceived usefulness is cognitively established, it begins to exert influence on academic achievement by shaping how students engage with learning tasks. Learners who view ChatGPT as academically useful are more likely to integrate it purposefully into their study practices, regulate effort allocation, and persist in achievement-oriented activities [73]. In this sense, perceived usefulness operates as the cognitive translation mechanism through which experienced effectiveness becomes academically consequential [45]. Accordingly, the impact of ChatGPT-assisted learning effectiveness on academic achievement is expected to unfold indirectly through shifts in students’ performance-related beliefs, reflecting the performance-oriented logic underlying the Technology Acceptance Model (TAM). So, the following hypothesis is proposed:
H6. 
Perceived usefulness mediates the relationship between ChatGPT-assisted learning effectiveness and academic achievement.

2.8. The Mediating Role of Self-Regulated Learning

In higher education, academic achievement is best explained by the degree to which learning support becomes aligned with students’ habitual ways of managing academic work across tasks and semesters [74]. From this standpoint, the value of AI-assisted learning systems lies in their capacity to synchronize with learners’ ongoing academic routines rather than in isolated instances of instructional assistance [75]. In technology-supported settings, such synchronization is achieved through a behavioral channel that stabilizes learning practices and embeds technological support within students’ regular approaches to studying [76]. Self-regulated learning delineates this channel by specifying how learners actively coordinate planning, monitoring, and evaluative activities in response to academic demands [54]. When ChatGPT-assisted learning is experienced as effective, it expands learners’ control over when and how they engage with content—allowing them to revisit explanations, diagnose misunderstandings, and proceed independently through learning challenges [3,49]. These affordances do not simply facilitate task completion; they recalibrate how learners organize their study behavior. With repeated use under such conditions, students are more likely to adopt consistent goal-setting practices, maintain strategic oversight of progress, and refine learning strategies, culminating in stable self-regulated learning patterns [77]. According to TAM-informed perspectives on technology use in learning environments, the academic consequences of system effectiveness emerge when external conditions shape learners’ task-oriented behaviors rather than merely enabling system use. In educational contexts, task-oriented behavior is reflected in learners’ regulation of study processes and effort distribution, not in usage frequency alone [78]. Consequently, the effectiveness of ChatGPT-assisted learning is expected to influence academic achievement through its role in aligning technological support with self-regulated learning routines that sustain academic engagement over time [4]. On this basis, self-regulated learning constitutes the behavioral conduit through which the effectiveness of ChatGPT-assisted learning is expressed in academic performance. Hence, the following hypothesis is formulated:
H7. 
Self-regulated learning mediates the relationship between ChatGPT-assisted learning effectiveness and academic achievement.
To elucidate the hypothesized relationships and underlying mechanisms, the conceptual framework of this study is presented in Figure 1.

3. Method

3.1. Sampling and Data Collection

This study investigates the impact of ChatGPT-assisted learning on perceived academic achievement among hospitality and tourism students, with particular emphasis on the mediating roles of perceived usefulness and self-regulated learning. The conceptual framework was derived from an extensive synthesis of relevant theoretical and empirical scholarship, which informed the development of directional hypotheses linking the focal constructs. To empirically assess the proposed relationships, a structured questionnaire was administered to students enrolled in hospitality and tourism programs at public universities in Egypt. The data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM), enabling simultaneous evaluation of the measurement and structural models, including the assessment of indirect (mediating) effects through bootstrapping procedures.
Prior to full deployment, a pilot study was conducted with thirty hospitality and tourism students who met the eligibility criterion of active ChatGPT use for academic purposes (at least once per week during the semester). The pilot aimed to evaluate clarity, readability, contextual relevance, and completion time. Participants provided structured written feedback and brief explanatory comments regarding wording, item sequencing, and interpretability. Based on this input, minor refinements were implemented to enhance linguistic precision and contextual alignment within the higher-education setting. The survey instrument was prepared in both Arabic and English to ensure linguistic appropriateness. Translation accuracy was verified through a back-translation process by two bilingual experts.
A purposive sampling strategy was employed to recruit undergraduate and postgraduate students from nine Egyptian public universities offering hospitality and tourism programs, selected to represent diverse geographic regions and institutional contexts. The study adopted a cross-sectional design, with data collected between October and December 2025 through an online survey. To ensure clarity in the sampling procedure, the survey link was distributed through multiple coordinated academic communication channels, including official university learning management systems, course mailing lists, faculty-coordinated communication platforms, student course groups, and relevant academic networks. In addition, peer referrals were used to extend the reach of the survey among eligible students. Participation was limited to students who self-identified as active ChatGPT users for academic tasks.
To ensure data integrity, several procedural safeguards were implemented. Institutional email verification and IP filtering were used to prevent duplicate submissions, while mandatory-response settings ensured complete data for all core measurement items. Completed responses were further screened for unusually short completion times and irregular response patterns prior to inclusion in the final dataset. After data screening and quality checks, the final dataset consisted of 689 valid responses, representing a well-defined population of active ChatGPT users within hospitality and tourism programs at Egyptian public universities. Accordingly, the empirical context of the study reflects the characteristics of students enrolled in public higher education institutions within this specific academic field, and the findings should therefore be interpreted within this educational context.
Given the single-source, cross-sectional design of the study, several procedural and statistical measures were implemented to minimize potential common method variance (CMV). Procedurally, participants were assured of anonymity and confidentiality, survey items were carefully worded to reduce ambiguity and social desirability bias, and predictor and criterion constructs were psychologically separated by intermixing items and using varied scale formats. Statistically, CMV was primarily assessed using Harman’s single-factor test, where an exploratory factor analysis indicated that the first unrotated factor accounted for only 38% of the total variance, below the commonly accepted threshold, suggesting that a single factor did not dominate the covariance structure. Complementing this, full collinearity diagnostics were performed within the PLS-SEM model, with all variance inflation factor (VIF) values below conservative cut-off levels, further confirming that CMV is unlikely to meaningfully affect the results. Collectively, these procedural and statistical precautions provide a robust mitigation of common method bias without relying on marker variables or latent method factor approaches, ensuring consistency and transparency in the study’s reporting.
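For readers who wish to trace the logic of this diagnostic, the sketch below illustrates an approximate Harman's single-factor test in Python, using the first unrotated principal component of the standardized items as a proxy for the first factor. The simulated responses, item names, and seed are hypothetical placeholders; they do not reproduce the study's dataset or its SPSS-based procedure.

```python
# Illustrative sketch of Harman's single-factor test (not the authors' SPSS workflow).
# Assumes `items` is a respondents x items table of the 18 Likert-scale indicators;
# the column names and simulated data below are hypothetical placeholders.
import numpy as np
import pandas as pd

def harman_single_factor_variance(items: pd.DataFrame) -> float:
    """Share of total variance captured by the first unrotated factor,
    approximated here by the first principal component of the standardized items."""
    z = (items - items.mean()) / items.std(ddof=1)   # standardize each indicator
    corr = np.corrcoef(z.values, rowvar=False)        # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, sorted descending
    return float(eigenvalues[0] / eigenvalues.sum())

# Example usage with simulated data (689 respondents x 18 items):
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(689, 18)),
                     columns=[f"item_{i + 1}" for i in range(18)])
share = harman_single_factor_variance(items)
print(f"First factor explains {share:.1%} of total variance (CMV concern if > 50%)")
```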

3.2. Measures

The survey instrument was organized into five principal sections. The first section gathered respondents’ demographic information, including gender, academic level, and frequency of ChatGPT usage for academic purposes. The second section assessed ChatGPT-Assisted Learning Effectiveness using a 5-item scale developed by [31], designed to capture students’ perceptions of how ChatGPT supports learning efficiency, comprehension, and engagement. The third section measured Perceived Usefulness through a 4-item scale adapted from [33], evaluating the extent to which students believed ChatGPT enhanced their learning performance and academic productivity. The fourth section examined Self-Regulated Learning via a 5-item scale developed by [79], assessing students’ abilities to plan, monitor, and evaluate their learning processes when using ChatGPT. The final section measured Students’ Academic Achievement using a 4-item scale adapted from [80], focusing on perceived academic performance and satisfaction with learning outcomes. It is important to note that this measure captures students’ self-reported perceptions of their academic performance rather than objective academic indicators such as GPA or official grades (see Appendix A).
To ensure content validity, the questionnaire underwent a thorough evaluation by a panel of academic experts with specialized knowledge in hospitality education, educational technology, and learning psychology. The experts systematically reviewed each item for clarity, contextual appropriateness, and alignment with the study’s conceptual framework. Their feedback was incorporated to refine item phrasing, enhance conceptual accuracy, and ensure cultural and contextual relevance. Internal consistency and measurement reliability were further confirmed using Cronbach’s alpha (α) coefficients for all constructs. All items were measured on a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).

3.3. Data Analysis

A total of 800 questionnaires were distributed to hospitality and tourism students enrolled in public universities across Egypt who actively used ChatGPT for educational purposes. Of these, 689 responses were complete and valid, yielding a response rate of 86.1% with no missing data. The sample size exceeded the recommended minimum of 10 respondents per survey item, in line with the guidelines of [81]. Among the valid responses, 391 participants (56.7%) were male and 298 (43.3%) were female. Most of the respondents (58.3%) were aged between 20 and 25 years, and 56.2% were classified as junior or senior students, reflecting the demographic composition of upper-level undergraduates in hospitality and tourism programs.
The questionnaire was administered online between October and December 2025 through official university learning management systems and coordinated course communication channels, following the recommendations of [82]. An introductory section provided participants with information about the study’s objectives, ethical considerations, confidentiality, and voluntary participation, followed by the survey link. Informed consent was obtained electronically, and respondents were assured of anonymity. Participants were recruited through academic networks, including student associations, university course groups, and peer referrals. All participants were informed that their involvement was voluntary and that data would be used solely for academic research purposes.
Preliminary data screening and descriptive analyses were conducted using SPSS version 26 to ensure data completeness and accuracy. Subsequently, the measurement and structural models were tested using SmartPLS version 4, following the two-step Structural Equation Modeling (SEM) procedure outlined by [83]. Partial Least Squares SEM (PLS-SEM) was selected due to its suitability for predictive and exploratory research, particularly in behavioral and educational contexts where normality assumptions may not hold [82,84]. To evaluate potential common method variance (CMV), Harman’s single-factor test was conducted following [85], confirming that no single factor accounted for the majority of variance.

4. Results

Measurement Model

To assess potential common method variance (CMV), an Exploratory Factor Analysis (EFA) was performed on the 18 measurement items (see Table 2). The first unrotated factor accounted for 38% of the total variance, well below the 50% threshold recommended by [85], suggesting that CMV is unlikely to compromise the validity of the study’s findings. Convergent validity and internal consistency were evaluated using Average Variance Extracted (AVE), Composite Reliability (CR), and Cronbach’s alpha. According to Hair et al. [82] and Fornell and Larcker [86], AVE values of 0.50 or higher indicate that a construct accounts for at least half of the variance in its indicators, CR values above 0.70 reflect adequate internal consistency, and Cronbach’s alpha coefficients of 0.70 or higher demonstrate acceptable reliability [81]. Furthermore, standardized outer loadings of 0.70 or greater confirm that individual items make meaningful contributions to their respective constructs [82].
All constructs satisfied these benchmarks, demonstrating that each latent variable captured more than 50% of its indicators’ variance and exhibited strong internal consistency. As shown in Table 2, all factor loadings exceeded 0.70, indicating high item reliability. Variance Inflation Factor (VIF) values were below 5 for all items, ruling out multicollinearity concerns [82,87]. To further enhance the robustness of the measurement model, a resampling approach using 5000 bootstrap samples was employed. This procedure provided additional confidence in the stability of factor loadings, CR, and AVE estimates, ensuring that the psychometric properties were not sample-specific and supporting the generalizability of the measurement model. Overall, the model exhibited robust psychometric properties, providing strong evidence of reliability and validity and confirming the suitability of the constructs for subsequent structural model analysis [82,84].
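To make the reported convergent validity criteria concrete, the following Python fragment computes AVE and composite reliability from a vector of standardized outer loadings. The loading values are hypothetical placeholders rather than the study's SmartPLS estimates, and Cronbach's alpha, which requires item-level data, is not shown.

```python
# Illustrative computation of AVE and composite reliability from standardized
# outer loadings (a sketch of the reported criteria, not SmartPLS output).
# The loadings below are hypothetical placeholders, not the study's estimates.
import numpy as np

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean of squared standardized loadings."""
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    sum_l = loadings.sum()
    error_var = (1 - loadings ** 2).sum()
    return float(sum_l ** 2 / (sum_l ** 2 + error_var))

loadings_pu = np.array([0.82, 0.79, 0.85, 0.77])   # hypothetical 4-item construct
print(f"AVE = {ave(loadings_pu):.3f} (threshold 0.50), "
      f"CR = {composite_reliability(loadings_pu):.3f} (threshold 0.70)")
```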
The findings presented in Table 3 indicate that each construct in the proposed model exhibits stronger associations with its own indicators than with those of other constructs, providing clear evidence of discriminant validity. Specifically, the square roots of the AVE values (highlighted in bold along the diagonal) exceed the corresponding inter-construct correlations, in accordance with the recommendations of Fornell and Larcker [86]. Moreover, all heterotrait–monotrait (HTMT) ratios, reported in brackets, remain below the 0.90 threshold, further confirming the empirical distinctiveness of the constructs. To enhance the robustness of these findings, a resampling approach using 5000 bootstrap samples was employed, providing additional confidence that the discriminant validity results are stable and not sample-specific. Collectively, these results support the discriminant validity of the measurement model, reinforcing the reliability and robustness of the subsequent structural model analyses [82,88].
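As an illustration of the HTMT criterion, the sketch below computes the ratio for two constructs directly from an item correlation matrix. The four-item correlation matrix and the construct assignments are hypothetical and are included only to show how the 0.90 threshold is applied; they are not the values reported in Table 3.

```python
# Illustrative HTMT computation between two constructs from an item correlation
# matrix (a sketch of the criterion, not the SmartPLS implementation).
import numpy as np

def htmt(corr: np.ndarray, idx_a: list, idx_b: list) -> float:
    """Heterotrait-monotrait ratio for constructs measured by items idx_a and idx_b."""
    # mean correlation between items of different constructs (heterotrait-heteromethod)
    hetero = corr[np.ix_(idx_a, idx_b)].mean()
    # mean within-construct correlation, excluding the diagonal (monotrait-heteromethod)
    def mono(idx):
        block = corr[np.ix_(idx, idx)]
        return block[~np.eye(len(idx), dtype=bool)].mean()
    return float(hetero / np.sqrt(mono(idx_a) * mono(idx_b)))

# Hypothetical 4-item example: items 0-1 load on construct A, items 2-3 on construct B
corr = np.array([[1.00, 0.62, 0.35, 0.33],
                 [0.62, 1.00, 0.31, 0.36],
                 [0.35, 0.31, 1.00, 0.58],
                 [0.33, 0.36, 0.58, 1.00]])
print(f"HTMT = {htmt(corr, [0, 1], [2, 3]):.2f} (discriminant validity concern if > 0.90)")
```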
Before examining the hypothesized relationships, the structural model’s predictive and explanatory power was evaluated. The R² values for Academic Achievement (AA = 0.580), Perceived Usefulness (PU = 0.450), and Self-Regulated Learning (SRL = 0.500) indicate substantial explanatory power, exceeding the commonly accepted threshold of 0.25 for moderate predictive relevance. Effect sizes (f²) ranged from 0.070 to 0.730, showing small to large impacts of the predictors on the endogenous constructs. The model’s predictive relevance (Q²) values ranged from 0.290 to 0.340, surpassing the recommended minimum of 0 for adequate predictive relevance [82]. The Standardized Root Mean Square Residual (SRMR = 0.041) was below the 0.08 cut-off, indicating good model fit [82,87]. All effects were further validated using bootstrapping with 5000 resamples, providing bias-corrected 95% confidence intervals (CI) to ensure stability and robustness of estimates.
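For clarity, the reported effect sizes follow Cohen's f² formula, f² = (R²_included − R²_excluded) / (1 − R²_included). The short sketch below works through the arithmetic with one hypothetical excluded-model R² value; only the included R² values are reported in the text.

```python
# Cohen's f-squared effect size used in the structural model evaluation.
# The excluded-R2 value below is hypothetical; only the included R2 (0.580 for AA)
# is reported in the text.

def f_squared(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1 - r2_included)

# (0.580 - 0.550) / (1 - 0.580) = 0.071, a small effect by the 0.02/0.15/0.35 guidelines
print(f"f2 = {f_squared(0.580, 0.550):.3f}")
```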
As shown in Table 4 and Figure 2, all hypothesized relationships in the proposed model were statistically significant and positive, underscoring the critical role of ChatGPT-Assisted Learning Effectiveness (CALE) in enhancing students’ academic achievement through the mediating effects of PU and SRL. The first hypothesis tested the direct effect of CALE on AA and was supported [β = 0.386, T = 3.946, p < 0.001, 95% CI = 0.192–0.561], indicating that students who effectively engage with ChatGPT tend to achieve higher academic performance. This finding suggests that integrating ChatGPT as a learning aid improves students’ understanding, problem-solving capabilities, and overall learning outcomes. Similarly, the second hypothesis examined the effect of CALE on PU and was strongly supported [β = 0.673, T = 9.274, p < 0.001, 95% CI = 0.581–0.742]. This result implies that students who perceive ChatGPT as an effective learning tool are more likely to recognize its usefulness in enhancing study efficiency and learning experience. Tangible academic benefits, such as clearer explanations, quicker feedback, and accessible support, significantly strengthen students’ perception of ChatGPT’s educational value.
The third hypothesis tested the relationship between CALE and SRL and was confirmed [β = 0.707, T = 10.734, p < 0.001, 95% CI = 0.621–0.779]. This demonstrates that students who use ChatGPT effectively are better able to manage their learning processes, including goal setting, time management, and self-assessment. The finding emphasizes that ChatGPT fosters autonomous learning behaviors by providing personalized guidance and facilitating independent inquiry. Likewise, the fourth hypothesis evaluated the impact of PU on AA and was supported [β = 0.281, T = 3.854, p < 0.001, 95% CI = 0.142–0.417], indicating that students who perceive ChatGPT as beneficial for their studies tend to perform better academically. This result reinforces the idea that the perceived value of educational technologies influences motivation, engagement, and learning outcomes. The fifth hypothesis tested the effect of SRL on AA and was confirmed [β = 0.220, T = 2.418, p = 0.016, 95% CI = 0.041–0.356], suggesting that self-regulated learners—those who actively plan, monitor, and reflect on their learning activities—are more likely to achieve superior academic results. ChatGPT’s interactive nature supports such behaviors by allowing learners to take control of their learning pace and strategies.
In terms of the indirect effects, the sixth hypothesis explored the mediating role of PU in the relationship between CALE and AA. The mediation effect was significant [β = 0.189, T = 2.366, p = 0.018, 95% CI = 0.031–0.284], indicating that ChatGPT enhances students’ academic performance indirectly by increasing their perception of its usefulness. Similarly, the seventh hypothesis examined the mediating role of SRL between CALE and AA and was also supported [β = 0.156, T = 3.699, p < 0.001, 95% CI = 0.102–0.301]. This finding highlights that ChatGPT fosters academic success not only through direct support but also by empowering students to take responsibility for their learning. By engaging with ChatGPT to clarify doubts, practice problem-solving, or receive feedback, students strengthen self-regulatory behaviors that contribute to sustained academic improvement.
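Conceptually, each indirect effect is the product of its two constituent paths (for H6, CALE → PU multiplied by PU → AA), and its confidence interval comes from repeating that computation across bootstrap resamples. The sketch below illustrates this product-of-coefficients logic on hypothetical standardized construct scores; it uses a plain percentile interval, whereas the study reports bias-corrected intervals from its PLS-SEM estimation.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, alpha=0.05, seed=7):
    """Percentile-bootstrap estimate of a simple indirect effect x -> m -> y.

    x, m, y : 1-D arrays of standardized scores standing in for CALE, the mediator
              (PU or SRL), and AA. Hypothetical inputs for illustration only.
    """
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)

    def indirect(xi, mi, yi):
        ones = np.ones(len(xi))
        a = np.linalg.lstsq(np.column_stack([ones, xi]), mi, rcond=None)[0][1]      # x -> m
        b = np.linalg.lstsq(np.column_stack([ones, xi, mi]), yi, rcond=None)[0][2]  # m -> y, controlling for x
        return a * b

    estimate = indirect(x, m, y)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                                                 # resample respondents with replacement
        draws[i] = indirect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return estimate, (lo, hi)

# Toy usage on simulated data (purely illustrative):
rng = np.random.default_rng(1)
cale = rng.normal(size=300)
pu = 0.67 * cale + rng.normal(scale=0.7, size=300)
aa = 0.39 * cale + 0.28 * pu + rng.normal(scale=0.7, size=300)
print(bootstrap_indirect_effect(cale, pu, aa, n_boot=1000))
```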
Overall, the results confirm that ChatGPT-Assisted Learning Effectiveness significantly enhances both Perceived Usefulness and Self-Regulated Learning, which in turn positively influence students’ academic achievement. These findings provide empirical evidence that educational AI tools like ChatGPT can play a transformative role in higher education by promoting learner autonomy, perceived utility, and improved academic outcomes.

5. Discussion

The findings of this study contribute to the growing body of research on AI-supported learning by moving beyond descriptive assessments of technology use toward a more explanatory understanding of how such tools influence academic outcomes. Rather than treating ChatGPT as an inherently beneficial educational intervention, the results emphasize that its impact on academic achievement unfolds through a set of interrelated cognitive and behavioral mechanisms. The results further indicate that the educational consequences of AI-assisted learning are not determined solely by the presence of the technology itself, but rather by how students cognitively evaluate its usefulness and behaviorally incorporate it into their learning processes, consistent with the TAM-informed perspective guiding this study. This perspective aligns with recent calls in the literature to shift from technology-centric evaluations toward learner-centered explanations that account for how students interpret, integrate, and act upon AI-supported learning resources.
The results highlighted that ChatGPT-assisted learning effectiveness has a positive effect on academic achievement. This result is consistent with prior empirical evidence suggesting that the academic value of ChatGPT is contingent on how effectively it supports learning tasks rather than on its mere presence in the learning environment. In line with this finding, Eljack et al. [30] argue that ChatGPT contributes to academic achievement when students experience it as a reliable and effective academic support tool integrated into their learning routines, which reinforces the interpretation that effectiveness is the key driver of achievement-related outcomes. Similarly, Hsiao and Chiu [29] show that when learners evaluate AI-supported learning systems in terms of their effectiveness in enhancing task performance, the impact extends beyond technology adoption to influence academic achievement, lending theoretical support to the observed effect. Evidence from Liu et al. [17,18,19] further substantiates this relationship by demonstrating that ChatGPT-assisted learning effectiveness enhances academic achievement when it is aligned with instructional objectives and supports understanding and task completion, suggesting that effectiveness operates through meaningful learning engagement rather than surface-level use. Consistent with this perspective, Olayinka et al. [28] report that academic achievement improves when generative AI tools function effectively within formal learning contexts, highlighting that effectiveness determines whether AI-assisted learning translates into measurable academic outcomes. Moreover, Yu et al. [89] provide additional analytical depth by showing that ChatGPT-assisted learning effectiveness influences academic achievement through its impact on how students manage and regulate their learning activities, indicating that the observed effect reflects sustained changes in learning behavior rather than short-term performance gains.
Furthermore, the results indicated that ChatGPT-assisted learning effectiveness positively affects perceived usefulness, indicating that students’ usefulness judgments are formed as a direct response to how effectively ChatGPT supports academic learning tasks. This pattern is consistent with the findings of Cai et al. [31], who show that perceived usefulness strengthens when AI-assisted learning systems demonstrate clear effectiveness in facilitating understanding and improving learning efficiency, suggesting that usefulness evaluations are grounded in experienced instructional value. Similarly, Barakat et al. [53] report that ChatGPT-assisted learning effectiveness enhances perceived usefulness when students observe tangible support for academic task completion, reinforcing the view that effectiveness precedes and shapes usefulness formation. From an evaluative standpoint, Moghavvemi and Jam [32] explain that system effectiveness directly informs perceived usefulness by influencing learners’ beliefs about performance enhancement, which aligns with the observed effect. Evidence from Zhuo et al. [33] further indicates that reductions in cognitive effort and smoother task execution—outcomes of effective AI-supported learning—are associated with higher levels of perceived usefulness. Moreover, Dzimar et al. [34] emphasize that perceived usefulness emerges through learners’ sustained experience with AI-based learning tools that effectively address concrete learning needs, highlighting that usefulness is a cumulative cognitive response to consistent ChatGPT-assisted learning effectiveness.
Similarly, the results revealed that ChatGPT-assisted learning effectiveness positively affects self-regulated learning, reflecting that students’ capacity to manage and regulate their learning is enhanced when ChatGPT delivers effective instructional support. In line with this view, Lee et al. [3] demonstrate that when ChatGPT-assisted learning effectiveness is evident, students are more inclined to engage in goal setting and self-monitoring activities, indicating that effectiveness supports deliberate regulation of learning processes. Evidence from Al-Abri [10] also indicates that ChatGPT-assisted learning effectiveness contributes to self-regulated learning by allowing learners to control learning pace, revisit explanations, and independently resolve learning difficulties. From a task-management perspective, Wang et al. [12,13] report that effective engagement with ChatGPT strengthens self-regulated learning by encouraging learners to manage problem-solving processes autonomously across academic tasks. Moreover, Zhu [36] highlights that sustained exposure to effective ChatGPT-assisted learning promotes strategic adjustment and learner autonomy over time, suggesting that the observed effect reflects the consolidation of self-regulated learning practices rather than short-term behavioral changes.
In addition, the results depicted that perceived usefulness has a positive effect on academic achievement, suggesting that academic success is partly shaped by how students judge the practical academic value of the learning tools they rely on. When students perceive a learning tool as useful, they are more likely to integrate it into high-stakes academic activities where performance matters, which provides a plausible explanation for the observed effect on academic achievement. This interpretation is consistent with Klarin et al. [64], who argue that perceived usefulness operates as a cognitive filter through which learners decide which tools deserve sustained reliance, thereby influencing achievement-related outcomes. In a similar vein, Shahraniza and Abubaker [62] show that students’ academic achievement improves when usefulness perceptions guide their learning decisions toward tools that support understanding and continuity in learning. Evidence from Al-Abdi et al. [63] indicates that perceived usefulness contributes to academic achievement by directing learners’ strategic choices in applying learning resources to meet academic goals. From an evaluative standpoint, Hussain and Anwar [60] demonstrate that usefulness perceptions strengthen academic achievement when students associate learning technologies with tangible improvements in task accomplishment. Likewise, Onal et al. [61] report that perceived usefulness positively affects academic achievement by shaping learners’ commitment to academic activities that are aligned with assessment and performance requirements.
Moreover, the results reported that self-regulated learning positively affects academic achievement, underscoring that academic success is closely linked to students’ ability to actively manage and direct their own learning processes. This result suggests that achievement gains are not solely a function of instructional quality, but are strongly influenced by how learners plan, monitor, and evaluate their learning activities across academic tasks. In a complementary manner, Mashhadi and Ghanizadeh [90] show that self-regulated learning enhances academic achievement by enabling students to manage motivational and emotional factors that support persistence and goal attainment. Evidence from Wolters and Hussain [70] depicts that self-regulated learning contributes to academic achievement by strengthening effort regulation and persistence, particularly in demanding academic situations. From a curriculum-alignment perspective, Ergen and Kanadlı [38] indicate that self-regulated learning supports academic achievement by helping students align study strategies with assessment requirements. Finally, Lim et al. [39] provide additional support by showing that self-regulated learning facilitates academic achievement through the integration of cognitive and motivational resources over time.
Crucially, the results concluded that perceived usefulness and self-regulated learning partially mediate the link between ChatGPT-assisted learning effectiveness and academic achievement. This mediating structure provides a more refined explanation of how AI-assisted learning influences performance, challenging simplistic narratives of direct technological impact. The partial nature of the mediation suggests that while effectiveness has a direct association with achievement, a substantial portion of its influence is transmitted through students’ cognitive evaluations and behavioral regulation. This finding is particularly important in light of prior studies that report mixed or modest effects of AI tools on academic performance, as it helps explain why such effects may vary across learners and contexts. By demonstrating that effectiveness operates through both perceived usefulness and self-regulated learning, the study offers a theoretically grounded account of how AI-supported learning environments translate technological potential into academic outcomes.

6. Theoretical Implications

The present study contributes to the literature by repositioning the Technology Acceptance Model (TAM) within an AI-assisted learning performance context. While TAM has traditionally been applied to explain users’ intentions and technology adoption behaviors, the current findings demonstrate that its core performance-related belief—perceived usefulness—can also function as a meaningful explanatory mechanism for academic outcomes in higher education. Rather than applying the full TAM structure, which traditionally incorporates both perceived usefulness and perceived ease of use, the present study adopts a TAM-informed perspective that emphasizes the performance-related role of perceived usefulness in AI-supported learning environments, building on TAM’s foundational logic to clarify how acceptance-related beliefs acquire academic relevance in such settings.
More specifically, the findings refine the role of perceived usefulness by situating it within a learning-performance framework. In conventional TAM applications, perceived usefulness primarily predicts behavioral intention and system use. In contrast, the present study demonstrates that within AI-assisted education, perceived usefulness also exerts a direct influence on academic achievement and mediates the relationship between perceived system effectiveness and learning outcomes. This repositioning reflects a contextual adaptation of TAM’s performance-related logic to explain educational outcomes rather than technology adoption alone, and it advances a more performance-oriented interpretation of perceived usefulness without departing from TAM’s theoretical foundations.
In addition, the study integrates self-regulated learning as a complementary behavioral mechanism operating alongside perceived usefulness. Rather than treating self-regulated learning as an independent theoretical framework detached from technology acceptance, the findings indicate that regulatory learning behaviors represent an important pathway through which students operationalize their evaluations of ChatGPT-assisted learning. In this sense, the model does not redefine TAM but clarifies how cognitive evaluations and learning regulation can jointly explain performance outcomes in AI-mediated educational settings.
Importantly, the identification of perceived usefulness and self-regulated learning as partial mediators highlights that AI-assisted learning effectiveness does not translate into academic achievement automatically. Instead, its influence operates through performance-relevant cognitive and behavioral processes. By empirically demonstrating these mediation mechanisms within a unified explanatory framework, the study contributes to the growing AI-in-education literature by moving beyond purely adoption-focused or descriptive approaches toward a mechanism-based understanding of how AI tools generate academic value.
Overall, the theoretical contribution lies not in redefining TAM but in applying and refining its core performance-related logic within a generative AI context. The findings show that acceptance-related beliefs and structured learning regulation can operate together to explain how ChatGPT-assisted learning effectiveness contributes to academic achievement, thereby strengthening the theoretical coherence of AI-assisted education research in higher education contexts.

7. Practical Implications

The practical implications of this study extend beyond a specific discipline and speak directly to how ChatGPT and similar AI-based systems should be integrated into contemporary educational practice. The findings clearly indicate that the educational value of ChatGPT does not arise from access, novelty, or frequency of use, but from its perceived effectiveness in supporting learning, shaping learners’ perceptions of usefulness, and fostering self-regulated learning behaviors. This has important consequences for how educational institutions conceptualize and operationalize AI integration across curricula.
Firstly, the demonstrated effect of ChatGPT-assisted learning effectiveness on academic achievement suggests that institutions should shift their focus from regulating AI usage to designing for learning effectiveness. Rather than debating whether ChatGPT should be allowed or restricted, educators and academic leaders should prioritize how it can be embedded into learning activities in ways that directly support understanding, application, and academic task completion. This implies a move away from generic AI policies toward instructional design strategies that align ChatGPT with clearly defined learning outcomes and assessment criteria.
Secondly, the strong relationship between effectiveness and perceived usefulness highlights that students’ evaluations of AI tools are experience-driven rather than policy-driven. From a practical standpoint, this means that perceived usefulness cannot be imposed through institutional endorsement or assumed through technological sophistication. Instead, educators should deliberately demonstrate the academic value of ChatGPT through structured learning tasks, guided examples, and reflective activities that make its contribution to learning visible. When students repeatedly experience ChatGPT as a tool that enhances their academic performance, perceptions of usefulness become grounded in learning outcomes rather than convenience or shortcut-seeking behavior.
The findings related to self-regulated learning carry particularly important implications for educational practice. As learning environments increasingly emphasize autonomy, lifelong learning, and independent inquiry, the ability of students to regulate their own learning has become a critical educational objective. The results indicate that ChatGPT-assisted learning can support this objective when it is used to scaffold planning, monitoring, and reflection rather than to replace cognitive effort. Educators can operationalize this insight by designing assignments that require iterative engagement with ChatGPT, justification of its use, and reflection on how it informed learning decisions. Such practices transform AI-supported learning into a driver of self-regulated learning rather than a source of dependency.
The partial mediating roles of perceived usefulness and self-regulated learning further suggest that unstructured or unguided adoption of ChatGPT is unlikely to maximize its educational benefits. Institutions should therefore invest in faculty development initiatives that focus not only on technical familiarity with AI tools, but on pedagogical strategies that harness their effectiveness to support meaningful learning behaviors. Training programs should emphasize how ChatGPT can be used to support explanation, conceptual clarification, and exploratory learning while maintaining academic rigor and integrity.
At the institutional level, these findings support the development of balanced, evidence-based AI integration frameworks. Rather than adopting overly restrictive bans or permissive laissez-faire approaches, educational institutions should articulate clear expectations for pedagogically sound AI use that align with learning objectives and assessment design. By grounding AI integration in effectiveness, perceived usefulness, and self-regulated learning, institutions can move toward a more intentional and learning-centered use of ChatGPT that enhances academic achievement while preserving the core educational mission.

8. Limitations and Future Research

Despite the contributions of this study, several limitations should be acknowledged, as they provide important directions for future research rather than detracting from the value of the findings. Firstly, the empirical data were collected exclusively from students enrolled in hospitality and tourism programs at Egyptian public universities. Although this context is theoretically meaningful, given the increasing emphasis on applied learning, autonomy, and industry-relevant knowledge in tourism education, the findings should be interpreted within the context of public higher education and may not fully represent students enrolled in private universities or other educational systems. Educational systems differ in terms of pedagogical practices, assessment structures, and students’ familiarity with AI-based tools, which may influence how ChatGPT-assisted learning is perceived and utilized. Moreover, the exclusive focus on public universities may limit the representativeness of the findings, as students enrolled in private institutions may differ in terms of socio-economic background, technological exposure, and learning environments. Future studies are therefore encouraged to conduct comparative analyses between public and private higher education institutions to examine potential structural differences in AI-assisted learning adoption and outcomes, and to replicate and extend the proposed model across different academic disciplines and cultural contexts to examine the robustness of the relationships identified in this study. Cross-national investigations would further enhance external validity by testing the stability of the proposed relationships across diverse educational systems and regulatory environments.
Secondly, the cross-sectional nature of the research design constrains the ability to draw conclusions about the dynamic evolution of perceived usefulness, self-regulated learning, and academic achievement over time. Theoretical arguments suggest that students’ perceptions and learning behaviors develop through repeated interaction with AI-supported learning environments. Longitudinal research designs would allow future studies to capture how the effectiveness of ChatGPT-assisted learning influences belief formation, self-regulatory practices, and academic outcomes across multiple learning stages or academic terms. Such designs would provide deeper insight into the sustainability of the observed effects and the potential for long-term learning transformation.
Thirdly, academic achievement in the present study was examined as an outcome variable without differentiating between types of achievement or assessment formats. In addition, the construct was measured using students’ self-reported perceptions of their academic performance rather than objective academic indicators such as GPA or official grades, which may introduce potential perceptual bias. Given that AI-assisted learning may differentially affect conceptual understanding, analytical performance, and applied problem-solving, future research could benefit from employing more nuanced measures of academic performance. For instance, distinguishing between formative and summative assessments, or between individual and collaborative learning outcomes, may reveal more fine-grained patterns of influence associated with ChatGPT-assisted learning. Future studies are therefore encouraged to incorporate objective academic indicators alongside perceptual measures to provide a more comprehensive evaluation of students’ academic achievement.
In addition, although this study focused on perceived usefulness and self-regulated learning as key mediating mechanisms, other theoretically relevant variables were not examined. Because the study adopted a TAM-informed perspective that emphasizes the performance-related role of perceived usefulness in explaining academic outcomes, other traditional TAM constructs, such as perceived ease of use, were not included in the present model. Future research may consider incorporating constructs such as perceived ease of use, cognitive load, learning motivation, or critical thinking to further refine the explanatory power of the model. Exploring alternative or additional mediating and moderating mechanisms would help advance a more comprehensive understanding of how AI-assisted learning systems operate within diverse educational environments. Furthermore, incorporating qualitative approaches—such as semi-structured interviews, focus groups, or Delphi panels—could provide deeper interpretive insight into students’ lived experiences with AI-assisted learning and help triangulate quantitative findings. Mixed-method designs would strengthen explanatory depth and enhance methodological robustness.
Furthermore, while the study examined cognitive and behavioral mechanisms underlying ChatGPT-assisted learning effectiveness, it did not explicitly incorporate Ethical AI considerations. Issues such as data privacy, algorithmic bias, academic integrity, transparency, and responsible AI use represent important contextual dimensions within higher education. Although these factors were not the focus of the present model, they may influence students’ perceptions and engagement with AI-assisted learning systems. Future research could extend the proposed framework by integrating ethical governance constructs or examining how institutional AI policies shape learning-related outcomes. Additionally, future research should model demographic characteristics, AI usage intensity, digital literacy, and institutional AI policies as moderators or contextual contingencies rather than mere control variables. Multi-group or moderated mediated designs could clarify how these factors shape the strength and direction of the observed relationships.
Finally, the rapid evolution of generative AI technologies represents an inherent limitation for all empirical studies in this domain. As ChatGPT and similar systems continue to develop in terms of functionality, accuracy, and integration into educational platforms, students’ experiences and perceptions may also change. Future research should therefore adopt adaptive research designs that account for technological advancement, including comparative studies across different AI tools or versions, to ensure that theoretical explanations remain aligned with evolving educational realities.

9. Conclusions

This study investigated how ChatGPT-assisted learning effectiveness influences academic achievement among hospitality and tourism students in Egyptian public universities through the mediating roles of perceived usefulness and self-regulated learning. The empirical findings should therefore be interpreted within the context of students enrolled in public higher education institutions within this academic field. Based on data collected from 689 students and analyzed using PLS-SEM, the findings provide strong empirical support for the proposed model. The results demonstrate that ChatGPT-assisted learning effectiveness exerts a significant direct effect on academic achievement [β = 0.386, p < 0.001]. In addition, it strongly predicts perceived usefulness [β = 0.673, p < 0.001] and self-regulated learning [β = 0.707, p < 0.001], indicating that effective engagement with ChatGPT substantially enhances both students’ performance-oriented beliefs and their capacity for autonomous learning. Both perceived usefulness [β = 0.281, p < 0.001] and self-regulated learning [β = 0.220, p = 0.016] significantly contribute to academic achievement, confirming their central explanatory roles within the model. Mediation analyses further reveal that ChatGPT-assisted learning effectiveness enhances academic achievement indirectly through perceived usefulness [β = 0.189, p = 0.018] and self-regulated learning [β = 0.156, p < 0.001]. These findings indicate that AI-driven learning tools improve performance not merely through direct exposure, but through strengthening students’ belief in the tool’s educational value and fostering structured self-regulatory engagement. The study refines the application of TAM’s performance-related logic within generative AI–supported learning environments by demonstrating how acceptance-related beliefs and regulatory learning behaviors jointly contribute to academic outcomes. Overall, the findings highlight that the effectiveness of AI-assisted learning translates into academic outcomes through the interplay between performance-related beliefs and learning regulation mechanisms, reflecting the TAM-informed perspective underpinning this study. Practically, the results suggest that educational institutions should focus not only on providing access to AI tools but also on promoting meaningful use, perceived usefulness, and learner autonomy to maximize academic benefits. Collectively, the findings underscore the performance-enhancing potential of ChatGPT-assisted learning within hospitality and tourism higher education.

Author Contributions

Conceptualization, B.S.A.-R. and A.M.H.; methodology, A.M.H.; software, A.M.H.; validation, A.M.H. and B.S.A.-R.; formal analysis, A.M.H.; investigation B.S.A.-R. and A.M.H.; resources, B.S.A.-R. and A.M.H.; data curation, B.S.A.-R. and A.M.H.; writing—original draft preparation, A.M.H. and B.S.A.-R.; writing—review and editing, A.M.H. and B.S.A.-R.; visualization, A.M.H. and B.S.A.-R.; supervision, A.M.H.; project administration, A.M.H.; funding acquisition, A.M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, grant number [KFU260458].

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Deanship of Scientific Research Ethical Committee, King Faisal University (project number: KFU260458, date of approval: 1 November 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Data are available upon request from researchers who meet the eligibility criteria. Kindly contact the first author privately through email.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Measurement Scales

ChatGPT-Assisted Learning Effectiveness
CALE-1: Using ChatGPT helps me understand course materials more effectively.
CALE-2: ChatGPT improves the efficiency of my learning process.
CALE-3: ChatGPT enhances my engagement with academic content.
CALE-4: ChatGPT helps me complete learning tasks more quickly.
CALE-5: ChatGPT supports deeper comprehension of complex topics.
Perceived Usefulness
PU-1: Using ChatGPT increases my overall academic performance.
PU-2: ChatGPT is a useful tool for completing academic tasks.
PU-3: ChatGPT improves my productivity in learning activities.
PU-4: ChatGPT contributes positively to my academic outcomes.
Self-Regulated Learning
SRL-1: I set specific goals for my learning when I use ChatGPT.
SRL-2: I keep track of my progress while using ChatGPT for studying.
SRL-3: I adjust my learning strategies based on the feedback I get from ChatGPT.
SRL-4: I think about my understanding and comprehension while learning with ChatGPT.
SRL-5: I evaluate how effective my study sessions are when I use ChatGPT.
Academic Achievement
AA-1: Using ChatGPT helps me achieve higher grades in my courses.
AA-2: Using ChatGPT improves my scores on exams and assessments.
AA-3: ChatGPT assists me in mastering the course material more effectively.
AA-4: Using ChatGPT positively impacts my measurable academic results.

References

  1. Hasanein, A. Responses to the AI Revolution in Hospitality and Tourism Higher Education: The Perception of Students Towards Accepting and Using Microsoft Copilot. Eur. J. Investig. Health Psychol. Educ. 2025, 15, 35. [Google Scholar] [CrossRef]
  2. Bansal, G.; Chamola, V.; Hussain, A.; Guizani, M.; Niyato, D. Transforming conversations with AI—A comprehensive study of ChatGPT. Cogn. Comput. 2024, 16, 2487–2510. [Google Scholar] [CrossRef]
  3. Lee, H.; Chen, P.; Wang, W.; Huang, Y.; Wu, T. Empowering ChatGPT with guidance mechanism in blended learning: Effect of self-regulated learning, higher-order thinking skills, and knowledge construction. Int. J. Educ. Technol. High. Educ. 2024, 21, 16. [Google Scholar] [CrossRef]
  4. Sobaih, A.; Elshaer, I.; Hasanein, A. Examining students’ acceptance and use of ChatGPT in Saudi Arabian higher education. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 709–721. [Google Scholar] [CrossRef]
  5. ElSayary, A. An investigation of teachers’ perceptions of using ChatGPT as a supporting tool for teaching and learning in the digital era. J. Comput. Assist. Learn. 2024, 40, 931–945. [Google Scholar] [CrossRef]
  6. Doyle, T. Helping Students Learn in a Learner-Centered Environment: A Guide to Facilitating Learning in Higher Education; Taylor & Francis: Abingdon, UK, 2023. [Google Scholar]
  7. Xu, J.; Chung, K.; Zhang, X.; Tavitiyaman, P. Fostering work-integrated learning in hospitality and tourism: An integration of Leximancer and students’ self-reflective statement approaches. J. China Tour. Res. 2022, 18, 1330–1354. [Google Scholar] [CrossRef]
  8. Schönberger, M. Integrating artificial intelligence in higher education: Enhancing interactive learning experiences and student engagement through ChatGPT. In The Evolution of Artificial Intelligence in Higher Education: Challenges, Risks, and Ethical Considerations; Emerald Publishing Limited: Leeds, UK, 2024; pp. 11–34. [Google Scholar]
  9. Watson, S.; Romic, J. ChatGPT and the entangled evolution of society, education, and technology: A systems theory perspective. Eur. Educ. Res. J. 2025, 24, 205–224. [Google Scholar] [CrossRef]
  10. Al-Abri, A. Exploring ChatGPT as a virtual tutor: A multi-dimensional analysis of large language models in academic support. Educ. Inf. Technol. 2025, 30, 17447–17482. [Google Scholar] [CrossRef]
  11. Al-Abdullatif, A.; Alsubaie, M. ChatGPT in learning: Assessing students’ use intentions through the lens of perceived value and the influence of AI literacy. Behav. Sci. 2024, 14, 845. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, Z.; Yin, Z.; Zheng, Y.; Li, X.; Zhang, L. Will graduate students engage in unethical uses of GPT? An exploratory study to understand their perceptions. Educ. Technol. Soc. 2025, 28, 286–300. [Google Scholar]
  13. Wang, Z.; Zou, D.; Zhang, R.; Lee, L.; Xie, H.; Wang, F. ChatGPT-enhanced self-regulated learning in programming education: Impacts on motivation, self-efficacy, and learning outcomes. Interact. Learn. Environ. 2025, 1–26. [Google Scholar] [CrossRef]
  14. Zhang, J.; Derakhshan, A. Integrating ‘GPT-4o’ as an AI Tool Into K-12 Contexts: Assessing Its Long-Term Effects on Students’ Self-Regulated Learning (SRL) and Task Engagement Through Latent Growth Curve Modelling. Eur. J. Educ. 2025, 60, e70329. [Google Scholar] [CrossRef]
  15. Ng, D.; Tan, C.; Leung, J. Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study. Br. J. Educ. Technol. 2024, 55, 1328–1353. [Google Scholar] [CrossRef]
  16. Khairy, H.; Al-Romeedy, B. The Effect of Hospitality and Tourism Programs’ Student Need for Achievement on Academic Resilience: The Moderating Roles of Mindfulness and Cyberloafing. J. Hosp. Tour. Educ. 2024, 38, 32–43. [Google Scholar] [CrossRef]
  17. Liu, H.; Li, T.; Zheng, H.; Li, Y.; Fan, J. Exploring the relationship between students’ language learning curiosity and academic achievement: The mediating role of foreign language anxiety. Behav. Sci. 2025, 15, 1133. [Google Scholar] [CrossRef]
  18. Liu, X.; Xiao, Y.; Li, D. Assessing strategic use of artificial intelligence in self-regulated learning: Instrument development and evidence from Chinese university students. Int. J. Educ. Technol. High. Educ. 2025, 22, 69. [Google Scholar] [CrossRef]
  19. Liu, Z.; Zuo, H.; Lu, Y. The impact of ChatGPT on students’ academic achievement: A meta-analysis. J. Comput. Assist. Learn. 2025, 41, e70096. [Google Scholar] [CrossRef]
  20. Howlader, M.; Tohan, M.; Zaman, S.; Chanda, S.; Jiaxin, G.; Rahman, M. Factors influencing the acceptance and usage of ChatGPT as an emerging learning tool among higher education students in Bangladesh: A structural equation modeling. Cogent Educ. 2025, 12, 2504224. [Google Scholar] [CrossRef]
  21. Tummalapenta, S.; Pasupuleti, R.; Chebolu, R.; Banala, T.; Thiyyagura, D. Factors driving ChatGPT continuance intention among higher education students: Integrating motivation, social dynamics, and technology adoption. J. Comput. Educ. 2025, 12, 1207–1230. [Google Scholar] [CrossRef]
  22. Cheng, C.; Mendoza, N.; Yan, Z. Collaborative Lesson Planning Influences Teachers’ Self-Regulated Learning Instruction (SRL-I): The Mediating Role of Perceived Benefits of SRL. Eur. J. Educ. 2025, 60, e70303. [Google Scholar] [CrossRef]
  23. Jin, X.; Jiang, Q.; Xiong, W.; Pan, X.; Zhao, W. Using the online self-directed learning environment to promote creativity performance for university students. Educ. Technol. Soc. 2022, 25, 130–147. [Google Scholar]
  24. Alzahrani, L. Analyzing students’ attitudes and behavior toward artificial intelligence technologies in higher education. Int. J. Recent Technol. Eng. (IJRTE) 2023, 11, 65–73. [Google Scholar] [CrossRef]
  25. George, B.; Wooden, O. Managing the strategic transformation of higher education through artificial intelligence. Adm. Sci. 2023, 13, 196. [Google Scholar] [CrossRef]
  26. Arora, A.K.; Kumar, A.; Tyagi, R.; Sharma, S.; Kumar, A.; Kumar, S. Future of education with AI-assisted technologies. In Artificial Intelligence in Peace, Justice, and Strong Institutions; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 169–190. [Google Scholar]
  27. Shih, H.C.J. What does engagement tell us about low-achieving learners’ motivational changes in a mobile-assisted personalized SRL training program? Interact. Technol. Smart Educ. 2026, 23, 1–24. [Google Scholar] [CrossRef]
  28. Olayinka, S.; Jacob, S.; Ariya, D. Effects of Chat GPT assisted learning technique on Upper Basic II students’ interest and achievement in Social Studies in Jos North, Plateau State, Nigeria. Matondang J. 2025, 4, 113–124. [Google Scholar]
  29. Hsiao, C.; Chiu, C. A Comprehensive Analysis of ChatGPT-Assisted Learning: A Systematic Exploration from Learning Outcomes to Student Behavior Patterns. In 2025 5th International Conference on Artificial Intelligence and Education (ICAIE); IEEE: New York, NY, USA, 2025; pp. 423–428. [Google Scholar]
  30. Eljack, N.; Altahir, N.; Mohammed, N.; Osman, S. ChatGPT as an AI Assistant Tool to Enhance University Students’ Teaching, Assessment and Academic Achievements: A Case Study of an EAP Lecturer. J. Lang. Teach. Res. 2025, 16, 1813–1823. [Google Scholar] [CrossRef]
  31. Cai, Q.; Lin, Y.; Yu, Z. Factors influencing learner attitudes towards ChatGPT-assisted language learning in higher education. Int. J. Hum.–Comput. Interact. 2024, 40, 7112–7126. [Google Scholar] [CrossRef]
  32. Moghavvemi, S.; Jam, F. Unraveling the influential factors driving persistent adoption of ChatGPT in learning environments. Educ. Inf. Technol. 2025, 30, 22443–22470. [Google Scholar] [CrossRef]
  33. Zhuo, Z.; Li, D.; Chen, J.; Chen, X.; Wang, S. Exploring Factors Influencing ChatGPT-Assisted Learning Satisfaction from an Information Systems Success Model Perspective: The Case of Art and Design Students. Systems 2025, 14, 7. [Google Scholar] [CrossRef]
  34. Dzimar, M.; Maulida, N.; Jannah, L.; Rahmah, R. Relationship Between ChatGPT Use and EFL Students’ Perceived Usefulness, Ease, Dependence, and Creativity. Engl. Educ. Lit. J. 2026, 6, 75–87. [Google Scholar]
  35. Russell, J.; Baik, C.; Ryan, A.; Molloy, E. Fostering self-regulated learning in higher education: Making self-regulation visible. Act. Learn. High. Educ. 2022, 23, 97–113. [Google Scholar] [CrossRef]
  36. Zhu, M. Leveraging ChatGPT to Support Self-Regulated Learning in Online Courses. TechTrends 2025, 69, 914–924. [Google Scholar] [CrossRef]
  37. Zhou, L.; Li, J. The impact of ChatGPT on learning motivation: A study based on self-determination theory. Educ. Sci. Manag. 2023, 1, 19–29. [Google Scholar] [CrossRef]
  38. Ergen, B.; Kanadlı, S. The effect of self-regulated learning strategies on academic achievement: A meta-analysis study. Eurasian J. Educ. Res. 2017, 17, 55–74. [Google Scholar] [CrossRef]
  39. Lim, C.; Ab Jalil, H.; Marof, A.; Saad, W. Peer learning, self-regulated learning and academic achievement in blended learning courses: A structural equation modeling approach. Int. J. Emerg. Technol. Learn. (IJET) 2020, 15, 110–125. [Google Scholar] [CrossRef]
  40. Wu, J.; Shen, W.; Lin, L.; Greenes, R.; Bates, D. Testing the technology acceptance model for evaluating healthcare professionals’ intention to use an adverse event reporting system. Int. J. Qual. Health Care 2008, 20, 123–129. [Google Scholar] [CrossRef] [PubMed]
  41. Hornbæk, K.; Hertzum, M. Technology acceptance and user experience: A review of the experiential component in HCI. ACM Trans. Comput.-Hum. Interact. (TOCHI) 2017, 24, 1–30. [Google Scholar] [CrossRef]
  42. Raitskaya, L.; Tikhonova, E. Enhancing critical thinking skills in ChatGPT-human interaction: A scoping review. J. Lang. Educ. 2025, 11, 5–19. [Google Scholar] [CrossRef]
  43. Moradi, H. Integrating AI in higher education: Factors influencing ChatGPT acceptance among Chinese university EFL students. Int. J. Educ. Technol. High. Educ. 2025, 22, 30. [Google Scholar] [CrossRef]
  44. Apriani, E.; Daulay, S.; Aprilia, F.; Marzuki, A.; Warsah, I.; Supardan, D. A Mixed-Method Study on the Effectiveness of Using ChatGPT in Academic Writing and Students’ Perceived Experiences. J. Lang. Educ. 2025, 11, 26–45. [Google Scholar] [CrossRef]
  45. Hasanein, A.; Sobaih, A. Drivers and consequences of ChatGPT use in higher education: Key stakeholder perspectives. Eur. J. Investig. Health Psychol. Educ. 2023, 13, 2599–2614. [Google Scholar] [CrossRef]
  46. Isiaku, L.; Muhammad, A.; Kefas, H.; Ukaegbu, F. Enhancing technological sustainability in academia: Leveraging ChatGPT for teaching, learning and evaluation. Qual. Educ. All 2024, 1, 385–416. [Google Scholar] [CrossRef]
  47. Al-Emran, M.; Shaalan, K. Recent Advances in Technology Acceptance Models and Theories; Springer: Cham, Switzerland, 2021. [Google Scholar]
  48. Fong, C.; Gonzales, C.; Hill-Troglin Cox, C.; Shinn, H. Academic help-seeking and achievement of postsecondary students: A meta-analytic investigation. J. Educ. Psychol. 2023, 115, 1–21. [Google Scholar] [CrossRef]
  49. Ba, S.; Zhan, Y.; Huang, L.; Lu, G. Investigating the impact of ChatGPT-assisted feedback on the dynamics and outcomes of online inquiry-based discussion. Br. J. Educ. Technol. 2025, 56, 1710–1734. [Google Scholar] [CrossRef]
  50. El-Akhras, H.; Abd El-Wahab, M.; Saghier, E.; Selem, K. ChatGPT adoption risks and cognitive achievement among tourism and hospitality college students: From faculty member perspective. J. Hosp. Tour. Insights 2025, 8, 1288–1307. [Google Scholar] [CrossRef]
  51. Wu, B.; Chen, X. Continuance intention to use MOOCs: Integrating the technology acceptance model (TAM) and task technology fit (TTF) model. Comput. Hum. Behav. 2017, 67, 221–232. [Google Scholar] [CrossRef]
  52. Berezi, I. Virtual Learning Environment: Redefining Higher Educational Delivery for Efficiency and Accessibility. Int. J. Educ. Manag. Rivers State Univ. 2025, 1, 451–467. [Google Scholar]
  53. Barakat, M.; Salim, N.; Sallam, M. University educators perspectives on ChatGPT: A technology acceptance model-based study. Open Prax. 2025, 17, 129–144. [Google Scholar] [CrossRef]
  54. Seli, H. Motivation and Learning Strategies for College Success: A Focus on Self-Regulated Learning; Routledge: Abingdon, UK, 2019. [Google Scholar]
  55. Alshahrani, K.; Qureshi, R. Review the prospects and obstacles of AI-enhanced learning environments: The role of ChatGPT in education. Int. J. Mod. Educ. Comput. Sci. 2024, 16, 71–86. [Google Scholar] [CrossRef]
  56. Rath, A. Leveraging ChatGPT to support terminology learning in oral anatomy: A mixed-methods study among linguistically diverse dental students. BMC Med. Educ. 2025, 25, 1425. [Google Scholar] [CrossRef]
  57. Rana, V.; Verhoeven, B.; Sharma, M. Generative AI in design thinking pedagogy: Enhancing creativity, critical thinking, and ethical reasoning in higher education. J. Univ. Teach. Learn. Pract. 2025, 22, 1–22. [Google Scholar] [CrossRef]
  58. Linus, A.; Aladesusi, G.; Monsur, I.; Elizabeth, F. Perceived usefulness, ease of use, and intention to utilize online tools for learning among college of education students. Indones. J. Multidiciplinary Res. 2025, 5, 41–52. [Google Scholar] [CrossRef]
  59. Wong, J.T.; Hughes, B.S. Leveraging learning experience design: Digital media approaches to influence motivational traits that support student learning behaviors in undergraduate online courses. J. Comput. High. Educ. 2023, 35, 595–632. [Google Scholar] [CrossRef] [PubMed]
  60. Hussain, F.; Anwar, M. Towards informed policy decisions: Assessing student perceptions and intentions to use ChatGPT for academic performance in higher education. J. Asian Public Policy 2025, 18, 377–404. [Google Scholar] [CrossRef]
  61. Onal, S.; Kulavuz-Onal, D.; Childers, M. Patterns of ChatGPT Usage and Perceived Benefits on Academic Performance Across Disciplines: Insights from a Survey of Higher Education Students in the United States. J. Educ. Technol. Syst. 2025, 54, 34–65. [Google Scholar] [CrossRef]
  62. Shahraniza, T.; Abubaker, Y. Influences of perceived usefulness and perceived ease of use on academic achievement: Mediating role of motivation. Rev. Conrado 2025, 21, e4718. [Google Scholar]
  63. Al-Abdi, B.; Badr, A.; Kasim, R.; Ali, F. Exploring the influence of social media information quality, and perceived usefulness on academic achievement: The mediating role of usage of social media. Int. J. Learn. Technol. 2025, 20, 189–211. [Google Scholar] [CrossRef]
  64. Klarin, J.; Hoff, E.; Larsson, A.; Daukantaitė, D. Adolescents’ use and perceived usefulness of generative AI for schoolwork: Exploring their relationships with executive functioning and academic achievement. Front. Artif. Intell. 2024, 7, 1415782. [Google Scholar] [CrossRef]
  65. Yeh, Y.; Kwok, O.; Chien, H.; Sweany, N.; Baek, E.; McIntosh, W. How College Students’ Achievement Goal Orientations Predict Their Expected Online Learning Outcome: The Mediation Roles of Self-Regulated Learning Strategies and Supportive Online Learning Behaviors. Online Learn. 2019, 23, 23–41. [Google Scholar] [CrossRef]
  66. Chen, E.; Heritage, M.; Lee, J. Identifying and monitoring students’ learning needs with technology. In Transforming Data into Knowledge; Routledge: Abingdon, UK, 2024; pp. 309–332. [Google Scholar]
  67. Estévez, I.; Rodríguez-Llorente, C.; Piñeiro, I.; González-Suárez, R.; Valle, A. School engagement, academic achievement, and self-regulated learning. Sustainability 2021, 13, 3011. [Google Scholar] [CrossRef]
  68. Akintayo, O.; Eden, C.; Ayeni, O.; Onyebuchi, N. Evaluating the impact of educational technology on learning outcomes in the higher education sector: A systematic review. Int. J. Manag. Entrep. Res. 2024, 6, 1395–1422. [Google Scholar] [CrossRef]
  69. Wu, X.Y. Unveiling the dynamics of self-regulated learning in project-based learning environments. Heliyon 2024, 10, e27335. [Google Scholar] [CrossRef]
  70. Wolters, C.; Hussain, M. Investigating grit and its relations with college students’ self-regulated learning and academic achievement. Metacognition Learn. 2015, 10, 293–311. [Google Scholar] [CrossRef]
  71. Wei, H.; Peng, H.; Chou, C. Can more interactivity improve learning achievement in an online course? Effects of college students’ perception and actual use of a course-management system on their learning achievement. Comput. Educ. 2015, 83, 10–21. [Google Scholar] [CrossRef]
  72. Aslam, M.S.; Nisar, S. Artificial Intelligence Applications Using ChatGPT in Education: Case Studies and Practices; IGI Global: Hershey, PA, USA, 2023. [Google Scholar]
  73. Ma, Y.; Zuo, M.; Gao, R.; Yan, Y.; Luo, H. Interrelationships among college students’ perceptions of smart classroom environments, perceived usefulness of mobile technology, achievement emotions, and cognitive engagement. Behav. Sci. 2024, 14, 565. [Google Scholar] [CrossRef]
  74. Duong, M.; Pullmann, M.; Buntain-Ricklefs, J.; Lee, K.; Benjamin, K.; Nguyen, L.; Cook, C. Brief teacher training improves student behavior and student–teacher relationships in middle school. Sch. Psychol. 2019, 34, 212. [Google Scholar] [CrossRef] [PubMed]
  75. Wan, Y.; Li, R.; Li, W.; Du, H. Impact pathways of AI-supported instruction on learning behaviors, competence development, and academic achievement in engineering education. Sustainability 2025, 17, 8059. [Google Scholar] [CrossRef]
  76. Wang, Q. Designing Technology-Mediated Learning Environments- Perspectives, Processes, and Applications; Springer: Berlin/Heidelberg, Germany, 2024. [Google Scholar]
  77. Hasanein, A.; Sobaih, A.; Elshaer, I. Examining Google Gemini’s acceptance and usage in higher education. J. Appl. Learn. Teach. 2024, 7, 223–231. [Google Scholar] [CrossRef]
  78. Cleary, T. The Self-Regulated Learning Guide: Teaching Students to Think in the Language of Strategies; Routledge: Abingdon, UK, 2018. [Google Scholar]
  79. Chen, M.; Wu, X. Attributing academic success to giftedness and its impact on academic achievement: The mediating role of self-regulated learning and negative learning emotions. Sch. Psychol. Int. 2021, 42, 170–186. [Google Scholar] [CrossRef]
  80. Youssef, E.; Medhat, M.; Abdellatif, S.; Al Malek, M. Examining the effect of ChatGPT usage on students’ academic learning and achievement: A survey-based study in Ajman, UAE. Comput. Educ. Artif. Intell. 2024, 7, 100316. [Google Scholar] [CrossRef]
  81. Nunnally, J.; Bernstein, I. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  82. Hair, J.; Matthews, L.; Matthews, R.; Sarstedt, M. PLS-SEM or CB-SEM: Updated guidelines on which method to use. Int. J. Multivar. Data Anal. 2017, 1, 107. [Google Scholar] [CrossRef]
  83. Leguina, A. A primer on partial least squares structural equation modeling (PLS-SEM). Int. J. Res. Method Educ. 2015, 38, 220–221. [Google Scholar] [CrossRef]
  84. Henseler, J.; Ringle, C.; Sinkovics, R. The use of partial least squares path modeling in international marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Leeds, UK, 2009; pp. 277–319. [Google Scholar]
  85. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef] [PubMed]
  86. Fornell, C.; Larcker, D. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  87. Kock, N. Common Method Bias in PLS-SEM: A Full Collinearity Assessment Approach. Int. J. e-Collab. (IJeC) 2015, 11, 1–10. [Google Scholar] [CrossRef]
  88. Chin, W. Commentary: Issues and Opinion on Structural Equation Modeling; JSTOR: New York, NY, USA, 1998. [Google Scholar]
  89. Yu, Q.; Yu, K.; Yang, R.; Li, B. Can ChatGPT as a generative AI tool enhance university student academic achievement? Innov. Educ. Teach. Int. 2025. [Google Scholar] [CrossRef]
  90. Mashhadi, R.; Ghanizadeh, A. Delaying or Gratifying Immediate Rewards in Language Classes? Scaffolding Grit, Self-Regulation, and Language Learning. J. Forensic Psychol. Res. Pract. 2025. [Google Scholar] [CrossRef]
Figure 1. Research Conceptual Model.
Figure 2. Research final model.
Table 1. Summary of Key Studies on ChatGPT, TAM, Self-Regulated Learning, and Academic Achievement.
Author(s) | Research Focus | Key Findings
[1,24] | AI adoption and student attitudes | Students’ engagement with AI tools is primarily shaped by cognitive perceptions rather than objective system features.
[3,4] | ChatGPT in higher education | ChatGPT enhances engagement and perceived learning support; outcome mechanisms remain underexplored.
[17,18,19,28] | Effectiveness of generative AI tools | Academic benefits emerge when AI tools are effectively embedded within learning processes.
[29,30] | Instructional value of ChatGPT | Effectiveness influences performance-oriented beliefs beyond mere usage frequency.
[31,32] | Perceived usefulness in AI learning | System effectiveness operates as a key antecedent of perceived usefulness.
[33,34] | Usefulness formation | Usefulness develops through repeated experiential evaluation of learning benefits.
[15,35] | Self-regulated learning in digital environments | SRL significantly predicts academic performance in technology-supported contexts.
[12,36,37] | AI tools and learner autonomy | Effective AI use fosters planning, monitoring, and independent problem-solving behaviors.
[38,39] | Self-regulated learning and achievement | Planning, monitoring, and regulating behaviors consistently enhance academic achievement.
Table 2. Constructs, Indicators, and Psychometric Properties.
Scale Variables | λ | VIF
ChatGPT-Assisted Learning Effectiveness (α = 0.913, CR = 0.915, AVE = 0.743)
CALE-1 | 0.859 | 2.569
CALE-2 | 0.872 | 2.686
CALE-3 | 0.853 | 2.552
CALE-4 | 0.853 | 2.491
CALE-5 | 0.872 | 2.676
Perceived Usefulness (α = 0.877, CR = 0.882, AVE = 0.731)
PU-1 | 0.863 | 2.389
PU-2 | 0.856 | 2.122
PU-3 | 0.876 | 2.402
PU-4 | 0.823 | 1.965
Self-Regulated Learning (α = 0.875, CR = 0.886, AVE = 0.668)
SRL-1 | 0.788 | 1.892
SRL-2 | 0.877 | 2.651
SRL-3 | 0.788 | 1.947
SRL-4 | 0.784 | 1.887
SRL-5 | 0.845 | 2.314
Academic Achievement (α = 0.885, CR = 0.888, AVE = 0.743)
AA-1 | 0.884 | 2.546
AA-2 | 0.836 | 2.026
AA-3 | 0.845 | 2.191
AA-4 | 0.882 | 2.514
Table 3. Discriminant Validity Assessment (Fornell–Larcker Criterion and HTMT Ratio).
 | AA | CALE | PU | SRL
AA | 0.862
CALE | 0.731 [0.810] | 0.862
PU | 0.672 [0.756] | 0.673 [0.747] | 0.855
SRL | 0.660 [0.742] | 0.707 [0.782] | 0.594 [0.669] | 0.817
Note: Bold values on the diagonal are the square roots of AVE; values in brackets are the HTMT ratios.
Table 4. Results of Hypotheses Testing.
Path | β | T-Value | p-Value | R² | f² | Q² | SRMR | 95% CI (Bootstrapping)
Direct Effect
(H1) CALE → AA | 0.386 | 3.946 | 0.000 *** | 0.580 | 0.190 | 0.340 | 0.041 | [0.192, 0.561]
(H2) CALE → PU | 0.673 | 9.274 | 0.000 *** | 0.450 | 0.730 | 0.290 | – | [0.581, 0.742]
(H3) CALE → SRL | 0.707 | 10.734 | 0.000 *** | 0.500 | 0.690 | 0.310 | – | [0.621, 0.779]
(H4) PU → AA | 0.281 | 3.854 | 0.000 *** | 0.580 | 0.110 | 0.328 | – | [0.142, 0.417]
(H5) SRL → AA | 0.220 | 2.418 | 0.016 * | 0.560 | 0.070 | 0.336 | – | [0.041, 0.356]
Indirect Effect
(H6) CALE → PU → AA | 0.189 | 2.366 | 0.018 * | – | – | – | – | [0.031, 0.284]
(H7) CALE → SRL → AA | 0.156 | 3.699 | 0.000 *** | – | – | – | – | [0.102, 0.301]
Note: (p < 0.05 = *; p < 0.01 = **; p < 0.001 = ***).
