Article

To Use but Not to Depend: Pedagogical Novelty and the Cognitive Brake of Ethical Awareness in Computer Science Students’ Adoption of Generative AI

1 Faculty of Applied Sciences, Macao Polytechnic University, Macau
2 Department of Childhood Education, Faculty of Education, University of the Free State, Bloemfontein 9301, South Africa
3 Center of Teacher Education, Minghsin University of Science and Technology, Hsinchu 304, Taiwan
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(2), 311; https://doi.org/10.3390/educsci16020311
Submission received: 14 January 2026 / Revised: 4 February 2026 / Accepted: 10 February 2026 / Published: 13 February 2026

Abstract

The integration of Generative Artificial Intelligence (GenAI) into higher education represents a paradigm shift from static skill acquisition to dynamic human–AI collaboration. However, the psychological mechanisms governing students’ adoption, specifically the interplay between pedagogical novelty, ethical awareness, and habit formation, remain underexplored. To address this gap, this study developed a dynamic practical curriculum incorporating AI and ethical awareness, aiming to foster responsible behavioral patterns in computer programming education. Employing a quasi-experimental design, we implemented a 16-week dual-track instructional intervention (incorporating AI-integrated pedagogy and ethical scaffolding) for 148 computer science students. Structural Equation Modeling (SEM) was applied to test an extended UTAUT2 framework. The findings reveal three critical theoretical insights that redefine GenAI adoption: (1) The eclipse of utility: contrary to established models, the traditional utilitarian drivers of performance expectancy (β = 0.076, p = 0.39) and effort expectancy (β = 0.125, p = 0.13) yielded non-significant effects on behavioral intention, suggesting that for digital natives, algorithmic efficiency has devolved into a baseline hygiene factor and lost its motivational power. (2) The dominance of pedagogical novelty: hedonic motivation emerged as the paramount predictor of both habit (β = 0.457, p < 0.001) and behavioral intention (β = 0.336, p = 0.001), confirming that adoption is driven by the situational interest and interactional novelty inherent in the human–AI partnership. (3) The cognitive brake mechanism: ethical awareness exhibited a divergent regulatory role; while it significantly legitimized conscious behavioral intention (β = 0.166, p = 0.011), it showed a non-significant, negative association with habit (β = −0.032, p = 0.653). This demonstrates that ethical reasoning functions as a cognitive brake (system 2) that actively disrupts the formation of mindless, automated dependency (system 1). These results provide empirical evidence for a dual regulation model of AI adoption and suggest that sustainable education requires leveraging pedagogical novelty to drive engagement while utilizing ethical awareness to prevent blind habituation.

1. Introduction

The swift progress of artificial intelligence (AI) technologies has strongly influenced diverse industries, and education, especially computer science (CS), is no exception. The integration of Generative Artificial Intelligence (GenAI) into CS teaching has precipitated a fundamental transformation in pedagogy (Zawacki-Richter et al., 2019; Shaukat et al., 2020; Prather et al., 2023). Unlike conventional instructional resources, typified by static textbooks and pre-recorded lectures, GenAI tools act as dynamic agents capable of providing real-time code explanation, debugging assistance, and personalized feedback (Becker et al., 2023; Long et al., 2025). For CS students, who have previously grappled with the high cognitive load and abstract nature of programming syntax through passive learning materials, the emergence of AI-integrated curricula represents a significant shift in the educational medium (Kabudi et al., 2021; Barbosa et al., 2024; Sari et al., 2024). While the potential of these tools to enhance learning is evident, the psychological mechanisms driving their adoption remain complex and underexplored, particularly regarding how students balance the appeal of technological novelty with the constraints of academic ethics.
Despite the rapid proliferation of GenAI, existing research utilizing the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) has predominantly focused on utilitarian drivers, such as performance expectancy and effort expectancy (Dwivedi et al., 2019; Tamilmani et al., 2021). However, recent scholarship suggests that for digital natives, the utility of digital tools is progressively taken for granted, representing a baseline expectation rather than a motivator (Kocielnik et al., 2019; Dwivedi et al., 2023). Instead, the primary driver may be shifting toward hedonic motivation, sparked by the pedagogical novelty of the AI medium itself. Compared to the static nature of traditional textbooks, AI-integrated materials offer a form of situational interest through interactive novelty and potentially override traditional efficiency metrics as the central determinant of habit formation and adoption behavior (Huang et al., 2022; Petrescu et al., 2023; Artemova, 2024; Robertson & Georgeon, 2025).
Furthermore, the introduction of AI into the curriculum brings ethical considerations to the forefront. In the context of programming, ethical training is often framed solely as a barrier designed to deter plagiarism (McIntire et al., 2024; Prashar et al., 2024). Nevertheless, ethical awareness may play a dual, regulatory role within the adoption process. On one hand, it legitimizes the intention to use AI by clarifying permissible boundaries. On the other hand, it may act as a cognitive brake that requires deliberative processing to inhibit the formation of mindless, automated habits (Oravec, 2023; Cotton et al., 2024; M. Li et al., 2024). Current literature has yet to empirically disentangle how ethical awareness differentially impacts conscious versus automated behavioral pathways. To address these theoretical gaps, this study adapts the UTAUT2 framework to investigate the adoption of AI-integrated programming materials with ethical awareness. Specifically, this research aims to answer the following questions:
  • RQ1: Does the pedagogical novelty of GenAI-integrated resources supersede traditional utilitarian performance expectancy and effort expectancy in shaping CS students’ adoption behaviors?
  • RQ2: How does ethical awareness differentially regulate the conscious intention to use AI versus the formation of automated habits, acting as a cognitive mechanism to prevent mindless dependency?
  • RQ3: Does the intrinsic enjoyment derived from AI interaction act as the primary catalyst for habit formation, mediating the relationship between pedagogical novelty and sustained usage intention?
By answering these questions, this study provides critical insights for educators seeking to leverage the engaging power of AI while cultivating responsible and reflective coding practices.

2. Literature Review

2.1. Generative AI as a Pedagogical Medium Beyond Static Resources

The landscape of CS education is undergoing a fundamental transition from static instructional resources to dynamic, generative agents. Traditionally, novices have relied on textbooks and online forums, resources that impose a high extraneous cognitive load by requiring learners to independently bridge the gap between abstract syntax and code execution (Becker et al., 2023; Qureshi, 2023). In contrast, GenAI tools function as conversational partners, offering real-time scaffolding, code synthesis, and personalized explanations. This shift represents a change in the pedagogical medium itself (X. Chen et al., 2021; Chiu, 2021; Chiu et al., 2024; Dzogovic et al., 2024). Unlike passive tools, GenAI agents introduce a layer of interactional novelty, transforming the solitary act of coding into a two-way dialog (Cano & Nunez, 2024; Grange et al., 2026). Understanding this shift is essential for analyzing students’ adoption drivers.

2.2. Theoretical Lens: UTAUT2 and the Hygiene Factor Phenomenon

UTAUT2 serves as the prevailing framework for understanding consumer technology adoption, theorizing that performance expectancy (efficiency) and effort expectancy (ease of use) are central determinants of behavior (Venkatesh et al., 2012). However, the applicability of these utilitarian constructs is increasingly debated in the context of digital natives (Tamilmani et al., 2021; Chang & Chang, 2023). For current university students, high adoption rates of digital tools have rendered efficiency and usability as baseline expectations or hygiene factors rather than distinguishing motivators. When a technology’s high performance is taken for granted, utilitarian drivers may reach a ceiling effect and diminish their predictive power relative to experiential or hedonic factors. This theoretical nuance is critical when examining GenAI, as it is a technology where efficiency is an inherent and assumed attribute rather than a variable feature.

2.3. Reframing Hedonic Motivation: Situational Interest and Pedagogical Novelty

Within the UTAUT2 framework, hedonic motivation is traditionally defined as the enjoyment derived from using a technology. While often conflated with gamification mechanics, recent educational psychology research suggests a broader interpretation rooted in situational interest. The novelty effect of interacting with a generative agent, one that can understand natural language and produce creative output, can cause a state of intrinsic cognitive engagement (Min & Schwarz, 2022; Asher & Harackiewicz, 2025; Guo & Fryer, 2025). In this context, hedonic motivation reflects the intellectual curiosity and satisfaction derived from the pedagogical novelty of the AI medium. Unlike the extrinsic loops of gamification, this form of enjoyment arises from the fluidity of the human–AI interaction itself, potentially acting as a dominant psychological driver in learning environments where traditional methods are perceived as tedious or static.

2.4. The Dual Nature of Ethical Awareness: Legitimacy vs. Automaticity

The integration of GenAI introduces complex ethical dimensions regarding academic integrity and authorship. Existing literature predominantly frames ethical awareness as an inhibitor, positing that greater ethical concerns reduce the likelihood of adoption (Al-Mughairi & Bhaskar, 2024). However, a cognitive regulation perspective may suggest a dual role. First, ethical awareness may provide cognitive legitimacy. When students possess a clear understanding of ethical boundaries, the ambiguity and perceived risk of using AI decrease, potentially empowering them to adopt the tool responsibly (Mikalef et al., 2022; Lu et al., 2024). Second, and conversely, ethical awareness involves deliberative cognitive processing, which requires conscious reflection and judgment. This stands in theoretical contrast to habit, which is characterized by automaticity and a lack of conscious thought. Consequently, a high level of ethical awareness may compel students to engage in reflective decision-making for each use case, theoretically counteracting the formation of mindless, automated usage loops. This tension between conscious ethical regulation and automated habit formation remains an underexplored area in technology acceptance research.

2.5. Systematic Instructional Design (SID) and Cognitive Load Theory (CLT)

The integration of GenAI into computer programming education in higher institutions necessitates a strategic realignment of Systematic Instructional Design (SID) with Cognitive Load Theory (CLT). Traditional programming instruction imposes a high intrinsic cognitive load because learners must simultaneously master complex syntax and abstract logic (Sweller, 2011). GenAI tools effectively reduce this intrinsic load by automating code generation. However, recent empirical evidence indicates that this efficiency frequently leads to a deficit in germane cognitive load, where learners engage in cognitive offloading rather than constructing the necessary mental schemas. In addition, without specific pedagogical intervention, unguided AI usage negatively correlates with critical thinking development (Tian & Zhang, 2025). To mitigate these risks, SID frameworks must transition from content generation to competency validation. Instructional strategies should progress beyond substitution, where AI replaces learner effort, toward redefinition, where AI outputs serve as objects of analysis (Romero, 2025). By mandating that students verify and rectify AI-generated solutions, educators ensure that the cognitive capacity liberated by automation is redirected toward higher-order problem decomposition and logic verification (Manorat et al., 2025). This structural approach ensures that AI integration augments the cognitive processing essential for deep computational mastery.
In summary, the unique characteristics of GenAI and its pedagogical novelty, the baseline expectations of digital natives, and the complex cognitive demands of ethical reasoning, all together challenge the traditional assumptions of the UTAUT2 model. To empirically investigate these dynamics, we propose an extended research model that integrates ethical awareness and re-evaluates the roles of utilitarian versus hedonic drivers. The specific relationships and hypotheses are detailed in the following section.

3. Research Design

To empirically investigate the proposed research model, this study employed a quasi-experimental design aimed at examining the determinants of GenAI adoption among CS students in Python (version 3.8.0) programming education. The study systematically investigated these factors based on the UTAUT2 model, proposed in 2012 as an extended version of the UTAUT model (Venkatesh et al., 2012). This model provides a comprehensive and in-depth analytical framework for technology acceptance research (Dwivedi et al., 2020; Tamilmani et al., 2021).

3.1. Research Model and Hypotheses Development

Based on the literature review and the contextual shift toward pedagogical novelty and ethical cognitive regulation, we propose the research model depicted in Figure 1. The model posits that while traditional utilitarian drivers remain present, the hedonic motivation derived from AI interaction and the ethical awareness cultivated through training are the decisive predictors of habit and behavioral intention. Consistent with the research model, gender, major, age, and prior experience were included as control variables to account for potential confounding effects on habit and intention.

3.1.1. The Dominance of Hedonic Motivation (The Novelty Effect)

In the context of GenAI, hedonic motivation can be reconceptualized as the situational interest triggered by the pedagogical novelty of the AI medium (Huang et al., 2022; Petrescu et al., 2023; Artemova, 2024; Robertson & Georgeon, 2025). We posit that the immediate feedback and conversational nature of AI tools reduce the cognitive work of coding, thereby reinforcing usage habits through positive reinforcement. Acknowledging the digital native context, we retain the traditional UTAUT2 paths but anticipate that their influence may be overshadowed by hedonic motivation due to the ceiling effect of modern technology (Tamilmani et al., 2021).
H1. 
Hedonic motivation positively influences habit.
H2. 
Hedonic motivation positively influences behavioral intention.

3.1.2. The Cognitive Brake of Ethical Awareness

Ethical awareness is not a standard component of the original UTAUT2 model. However, researchers have explored the integration of ethical considerations into technology acceptance models, including UTAUT2; while not explicitly part of UTAUT2, ethical awareness can be treated as an extension or additional construct that influences user behavior and intentions in technology adoption (Venkatesh et al., 2012). We hypothesize a divergent path for ethical awareness. According to dual process theory, intention is a conscious commitment (system 2), while habit implies automaticity (system 1). Ethical awareness serves as a cognitive brake, legitimizing the intention to use the tool by clarifying boundaries, while simultaneously requiring deliberative processing that constrains the formation of mindless habits. The following hypotheses are formulated from the perspective of ethical awareness.
H3. 
Ethical awareness positively influences behavioral intention.
H4. 
Ethical awareness will have a negative or non-significant influence on habit (the cognitive brake hypothesis).

3.1.3. The Hygiene Factors and Rational Constraints

In the UTAUT2 model, behavioral intention is regarded as a robust construct that integrates the diverse factors shaping technology adoption. It is formed by performance expectancy, effort expectancy, and social influence, and moderated by habit and hedonic motivation. Behavioral intention is a pivotal antecedent of user behavior: the stronger the intention to use a technology, the more likely it is to be used. It summarizes the motivational factors that affect the likelihood of using the technology (Kim et al., 2007). Additionally, price value represents the rational economic constraint. The following hypotheses are outlined from the perspective of behavioral intention.
H5. 
Performance expectancy positively influences behavioral intention.
H6. 
Effort expectancy positively influences behavioral intention.
H7. 
Social influence positively influences behavioral intention.
H8. 
Price value positively influences behavioral intention.

3.1.4. The Habituation Pathway

Habit is an important construct in the UTAUT2 model (Dwivedi et al., 2020). It reflects unconscious behavior triggered by environmental prompts rather than deliberate decision-making (Ambalov, 2021). Habit has robust predictive power over technology use and acceptance: once a habit is formed, it can significantly influence future technology use, sometimes even beyond the influence of intention (C.-F. Chen & Chao, 2011). In line with the model in Figure 1, the following hypothesis is framed from the perspective of habit.
H9. 
Habit positively influences behavioral intention.

3.2. Instructional Intervention Design

To ensure the ecological validity of the study and to empirically test the pedagogical novelty and cognitive brake hypotheses, this research implemented a comprehensive dual-track instructional framework. Figure 2 illustrates the parallel progression of the AI-integrated pedagogy track (track A) and the ethical scaffolding track (track B) over the 16-week semester, highlighting key integration points. The intervention moves beyond simple tool adoption, structuring the semester-long course (16 weeks) into two parallel but interconnected tracks: the AI-integrated pedagogy track, aimed at stimulating hedonic motivation via novelty, and the ethical scaffolding track, aimed at regulating habit via cognitive friction.

3.2.1. Track A: The AI-Integrated Curriculum (Pedagogical Novelty)

The curriculum design shifts the pedagogical focus from syntax memorization to logic architecture, leveraging the generative capabilities of Large Language Models (LLMs) to reduce extraneous cognitive load and foster situational interest. The AI agent applied in this study is either DeepSeek-V3 or DOLA (v9.6), as chosen by the participants. The curriculum is divided into four progressive phases (see Figure 3 for the detailed workflow).
The flowchart demonstrates the student’s cognitive progression from the initial novelty trigger (Phase 1) to logic-based prompting (Phase 2), critical auditing (Phase 3), and finally to collaborative co-creation (Phase 4), where AI serves as a partner in creative production.

In Phase 1, comparative awakening (weeks 1–3), students engage in a manual vs. AI contrast experiment designed to trigger immediate pedagogical novelty. Students first pick randomly from the Python algorithmic problem list (as shown in Appendix A) and solve a classic algorithmic problem manually, documenting time spent and frustration levels. Subsequently, they utilize an AI agent (DeepSeek or DOLA) to generate the same solution. This stark contrast in efficiency, together with the conversational interface of the AI, serves as a novelty trigger and activates the hedonic motivation pathway (Huang et al., 2022).

Phase 2 involves prompt engineering and logic decomposition (weeks 4–8). This phase transitions students from code writers to prompt engineers. The instructional material focuses on decomposition-based prompting, in which students break down complex problems into logical pseudocode before interacting with the AI. This ensures that the AI is used as a cognitive scaffold rather than a solution dispenser, maintaining high cognitive engagement (Chiu, 2021).

Phase 3, the AI auditor and debugging (weeks 9–13), introduces human auditing to counter the hygiene factor effect, whereby efficiency becomes taken for granted. Instructors provide pre-generated AI code containing subtle logical errors or security vulnerabilities. Students assume the role of code auditors, required to identify, explain, and fix these AI-induced errors. This activity reinforces technical competency while demystifying the infallibility of the tool (Moorhouse et al., 2023).

In Phase 4, collaborative co-creation (weeks 14–16), the capstone project requires students to develop a functional application using AI for up to 80% of the codebase. The assessment focuses on the students’ ability to integrate modules and refine the user experience, symbolizing the shift to high-level creative problem-solving.

3.2.2. Track B: The Ethical Scaffolding (The Cognitive Brake)

To test the cognitive brake hypothesis (H4), a dedicated ethical awareness training module was embedded. Parallel to the technical track, the ethical intervention is designed to introduce cognitive friction and force students to switch from system 1 (automatic habit) to system 2 (deliberative reasoning), thus preventing mindless dependency (Kahneman, 2011; Cotton et al., 2024). The specific mechanisms are detailed in Figure 4. The flowchart details how the intervention disrupts system 1 (automatic copy-pasting) and enforces system 2 (deliberative verification), thereby regulating habit formation.
In Mechanism 1, red team sensitization (weeks 1–3), students are tasked with red-teaming exercises (as shown in Appendix B), in which they intentionally probe the AI for hallucinations, biases, or incorrect code. In parallel, the teaching of ethical principles (as shown in Appendix C) is formally embedded to address AI’s ethical challenges (Ayinla et al., 2024; Olorunfemi et al., 2024; Zou et al., 2025). By exposing the flaws of the black box, this activity cultivates moral sensitivity and skepticism, laying the groundwork for critical usage (Dwivedi et al., 2023).

In Mechanism 2, the pause and validation protocol (weeks 4–13), a mandatory AI disclosure checklist (as shown in Appendix D) is embedded into every assignment submission process. Students must categorize their code as AI-generated, human-modified, or human-written and provide a brief justification for using AI. This procedural requirement acts as a cognitive brake, disrupting the automaticity of copy-pasting and forcing conscious ethical reflection before submission (Tlili et al., 2023, 2025).

In Mechanism 3, the co-constructed code of conduct (weeks 14–16), instead of imposing top-down rules, students participate in drafting the classroom AI regulation. This participatory approach enhances the cognitive legitimacy of the ethical guidelines, ensuring that students internalize the norms rather than viewing them as external constraints.

3.3. Data Collection Procedure

The data collection strategy was designed to ensure ecological validity while adhering to ethical standards. As illustrated in Figure 5, the procedure was executed in four sequential phases over the 16-week semester. Phase 1 took place prior to the semester: the target population comprised CS students enrolled in the programming course, and recruitment was conducted during the first lecture. Students were explicitly informed that their survey responses would remain anonymous and would not influence their academic grades, a procedural remedy designed to reduce evaluation apprehension and social desirability bias (Podsakoff et al., 2003). In Phase 2, intervention and observation (weeks 1–15), participants engaged in the dual-track instructional framework (as illustrated in Figure 2). During this period, check-in data were collected non-intrusively to verify engagement levels, serving as a manipulation check to ensure that the ethical scaffolding was actively experienced, not just theoretically presented.
In Phase 3, survey administration (week 16), post-intervention data were collected. The questionnaires (as shown in Appendix E) were distributed immediately after the submission of the capstone project to capture the students’ cumulative experience. To ensure data quality, four trap questions with reversed wording were randomly inserted, and the order of question blocks was randomized to mitigate order effects (Hair et al., 2019). Finally, Phase 4 covered data screening and quality control. Table 1 presents the overall phases of the research. A total of 148 responses were initially received, and the dataset underwent a rigorous screening process: five responses with >10% missing data were removed, three responses exhibiting unvarying (straight-lined) patterns were discarded, and four responses failing the trap questions were excluded. This resulted in a final valid sample of n = 136. A post hoc power analysis using G*Power (version 3.1.9.7) indicated that this sample size provides sufficient power (>0.80) to detect medium effect sizes (f2 = 0.15) for the proposed SEM model (Cohen, 2013).
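The three screening rules above (excess missingness, straight-lining, failed reversed trap items) can be sketched as a small pandas routine. This is a minimal illustration, not the authors' actual script; the column names and toy data are hypothetical.

```python
import pandas as pd

def screen_responses(df, item_cols, trap_pairs, max_missing=0.10):
    """Apply the three screening rules: missing data, straight-lining, traps."""
    keep = pd.Series(True, index=df.index)
    # Rule 1: drop responses with more than 10% missing Likert items
    keep &= df[item_cols].isna().mean(axis=1) <= max_missing
    # Rule 2: drop unvarying (straight-lined) responses
    keep &= df[item_cols].nunique(axis=1) > 1
    # Rule 3: drop responses answering a reversed trap item identically
    for item, reversed_item in trap_pairs:
        keep &= df[item] != df[reversed_item]
    return df[keep]

# Hypothetical toy data: respondent B straight-lines, C fails the trap pair
raw = pd.DataFrame({
    "q1": [1, 4, 1], "q2": [2, 4, 2], "q3": [3, 4, 5],
    "t1": [5, 4, 3], "t1_rev": [1, 2, 3],
}, index=["A", "B", "C"])
clean = screen_responses(raw, ["q1", "q2", "q3"], [("t1", "t1_rev")])
print(list(clean.index))  # → ['A']
```

In practice, the threshold and trap logic would be adapted to the actual instrument; the point is that each rule reduces to one vectorized boolean filter.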

4. Results

4.1. Characteristics of the Samples

As shown in Figure 6, the sample includes 136 participants from a university in Guangdong, China, all of whom are computer science majors. The group is primarily male (118 participants, 86.76%) with 18 females (13.24%), a distribution common to the CS major that provides a meaningful reference for related studies (Shrestha & Das, 2022). Ages range from 19 to 24, with most participants being 20 years old (38.97%).

4.2. Reliability and Validity of Constructs

The collected data were analyzed with Mplus 8 and IBM SPSS 27.0. The model displays good reliability and validity for most of the constructs. As shown in Table 2, habit is a significant factor with high factor loadings (0.758–0.834), strong reliability (Cronbach’s alpha = 0.891, CR = 0.893), and an AVE of 0.583. Hedonic motivation likewise shows strong reliability (Cronbach’s alpha = 0.881) and convergent validity (AVE = 0.557), aligning well with previous research on technology acceptance (Venkatesh et al., 2012). Performance expectancy has comparatively good construct validity, with factor loadings from 0.639 to 0.777 and a high composite reliability of 0.863. Effort expectancy shows excellent internal consistency, with a Cronbach’s alpha of 0.872 and an AVE of 0.541. Price value maintains high reliability, with a composite reliability of 0.890 and an AVE of 0.547, supporting its role in technology acceptance (Davis, 1989). Social influence also shows strong reliability (Cronbach’s alpha = 0.843), although its AVE (0.475) falls slightly below the 0.5 threshold (Takona, 2024). Finally, behavioral intention and ethical awareness have relatively low AVEs of 0.386 and 0.403 but high CRs of 0.787 and 0.797; the higher composite reliability compensates for the lower AVE, maintaining model validity (Fornell & Larcker, 1981).
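The CR and AVE statistics reported in Table 2 follow the standard Fornell–Larcker formulas: AVE is the mean of the squared standardized loadings, and CR is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The sketch below computes both; the example loadings are hypothetical values chosen within the range reported for habit, not the study's actual estimates.

```python
def composite_reliability(loadings):
    """Fornell-Larcker composite reliability: (sum λ)² / ((sum λ)² + sum(1 - λ²))."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # residual (error) variances
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings within the 0.758-0.834 range reported for habit
habit_loadings = [0.758, 0.790, 0.810, 0.820, 0.834]
print(round(composite_reliability(habit_loadings), 3))
print(round(average_variance_extracted(habit_loadings), 3))
```

Because the illustrative loadings are not the estimated ones, the printed values differ from the tabled CR = 0.893 and AVE = 0.583; the formulas, however, are the ones those statistics are based on.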

4.3. Discriminant Validity

Discriminant validity is assessed and presented in Table 3. The square root of the AVE (√AVE) for each construct is compared to the correlations between constructs; discriminant validity is established when √AVE is higher than the construct’s correlations with the remaining constructs (Takona, 2024). For performance expectancy, √AVE (0.717) is higher than its correlations with effort expectancy (0.573), social influence (0.583), hedonic motivation (0.660), price value (0.507), habit (0.434), ethical awareness (0.235), and behavioral intention (0.530), indicating robust discriminant validity. For effort expectancy, √AVE (0.736) exceeds its correlations with the other constructs, including social influence (0.531), hedonic motivation (0.596), price value (0.541), habit (0.443), ethical awareness (0.209), and behavioral intention (0.522), supporting the unique nature of effort expectancy within the model. A similar pattern holds for social influence, hedonic motivation, price value, habit, ethical awareness, and behavioral intention. Notably, habit (√AVE = 0.764) exceeds its correlations with ethical awareness (0.235) and behavioral intention (0.517). Ethical awareness (√AVE = 0.634) is clearly distinct from behavioral intention (correlation of 0.312), maintaining its unique position in the model, and has a low correlation with habit (0.121). Behavioral intention (√AVE = 0.621) is higher than most of its correlations with other constructs; however, its correlations with hedonic motivation and price value are slightly higher than its √AVE. It is theoretically acceptable for √AVE to be slightly lower than the correlation with certain factors (Henseler et al., 2015).
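The Fornell–Larcker comparison just described reduces to an elementwise check: a construct's √AVE must exceed each of its inter-construct correlations. A minimal sketch using the values reported for performance expectancy (the function name is illustrative):

```python
import math

def fornell_larcker_ok(ave, correlations):
    """Discriminant validity holds when sqrt(AVE) exceeds every correlation."""
    return all(math.sqrt(ave) > abs(r) for r in correlations)

# Table 3 values for performance expectancy (sqrt(AVE) = 0.717)
pe_ave = 0.717 ** 2
pe_corr = [0.573, 0.583, 0.660, 0.507, 0.434, 0.235, 0.530]
print(fornell_larcker_ok(pe_ave, pe_corr))  # → True
```

Running the same check per construct reproduces the column-by-column comparison in Table 3, and flags the borderline cases noted for behavioral intention.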
These results confirm that each construct is conceptually distinct, supporting confidence in the reliability and applicability of the model. The distinctiveness of ethical awareness and behavioral intention, in particular, supports the formulation of specialized strategies and interventions designed to improve each area independently, contributing knowledge that can inform future research and practical applications across varied contexts.

4.4. Regression Analyses

The regression analysis examines the predictive relationships by modeling habit, a positive, automated behavioral pattern in the continuous use of technology. Figure 7 and Figure 8 illustrate the regression models.
A scatterplot of regression standardized predicted values versus standardized residuals was used to assess model fit and validate assumptions. The regression model fits the data well: the points are widely dispersed with no visible patterns. This even distribution indicates homoscedasticity, that is, constant variance of the residuals across predicted values, which supports the reliability of the model’s predictions at each level of the independent variables. The randomness and lack of patterns in the residuals also confirm the model’s ability to capture the relationship between the predictors and habit. This diagnostic approach adheres to established best practices in technology acceptance research (Venkatesh et al., 2012). Figure 8 shows a scatterplot of the regression standardized predicted values versus the regression standardized residuals for the dependent variable behavioral intention. These qualities of the scatterplot indicate a suitable and legitimate regression model for behavioral intention, supporting the robustness of the findings and the reliability of the inferences presented in this study. Previous research highlights the importance of such diagnostics in contexts such as technology acceptance and behavioral intention (Davis, 1989; Venkatesh et al., 2012). This distribution indicates that the linear relationship assumed by the regression model is sound (Henseler et al., 2015) and suggests that the effect of an AI-infused curriculum on learning outcomes is gradual and predictable.

5. Discussion

This study set out to unravel the complex psychological mechanisms driving CS students’ adoption of GenAI within a higher education programming curriculum. By integrating ethical awareness into the UTAUT2 framework and implementing a dual-track pedagogical intervention, we aimed to move beyond the traditional efficiency-centric view of technology acceptance. The SEM results (as shown in Figure 9 and Table 4 and Table 5) provide strong empirical support for a paradigm shift from utilitarian tool adoption to an experience-driven, ethically regulated partnership. As shown in Table 4, the χ2/df ratio (1.52) was well below the conventional threshold of 3, indicating a sound balance of model complexity and fit (Kline, 2016). The RMSEA (0.062) is also below the 0.08 cutoff, approaching the ideal value of 0.06 and showing minimal population-level approximation error (Hu & Bentler, 1999). Relative fit indices (CFI = 0.845, IFI = 0.849, TLI = 0.835) all exceed the 0.8 threshold, consistent with small-sample SEM standards in the social sciences (Marsh et al., 2004). The traditional indices AGFI (0.655) and GFI (0.691) are lower, but this reflects their sensitivity to small sample sizes (n = 136) and model complexity; modern indices are now preferred (Byrne, 2016).
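As a compact summary, the cutoff logic applied to these fit indices can be expressed in a short Python sketch; the index values are those reported in Table 4, and the thresholds follow the cited sources (Kline, 2016; Hu & Bentler, 1999; Marsh et al., 2004):

```python
# Fit indices reported in Table 4 of the study.
fit = {"chi2_df": 1.52, "RMSEA": 0.062, "CFI": 0.845, "IFI": 0.849, "TLI": 0.835}

# Conventional cutoffs from the SEM literature.
checks = {
    "chi2_df": fit["chi2_df"] < 3,   # parsimony-adjusted fit (Kline, 2016)
    "RMSEA": fit["RMSEA"] < 0.08,    # approximation error (Hu & Bentler, 1999)
    "CFI": fit["CFI"] > 0.8,         # relative fit, small-sample standard
    "IFI": fit["IFI"] > 0.8,         # (Marsh et al., 2004)
    "TLI": fit["TLI"] > 0.8,
}
print(all(checks.values()))  # True: every index meets its threshold
```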

5.1. The Eclipse of Utility: Efficiency as a Hygiene Factor

A striking finding from our model is the non-significance of performance expectancy (β = 0.076, p = 0.39) and effort expectancy (β = 0.125, p = 0.13). This defies the traditional logic of technology acceptance, where utility is typically the primary driver (Venkatesh et al., 2012). We interpret this through the lens of the hygiene factor hypothesis (Tamilmani et al., 2021). For digital native students, the ability of AI to generate code quickly and easily is no longer a differentiator but a baseline expectation. Its presence does not motivate, but its absence would dissatisfy. This suggests that in the era of LLMs, utilitarian efficiency may have hit a ceiling effect, losing its predictive power for behavioral intention.

5.2. The Primacy of Pedagogical Novelty: Hedonic Motivation as the Engine

One of the most significant findings is the dominance of hedonic motivation as the strongest predictor of both habit (β = 0.457, p < 0.001) and behavioral intention (β = 0.336, p = 0.001). Contrary to early technology acceptance studies, where utilitarian drivers (performance expectancy) were paramount (Venkatesh et al., 2012), our results indicate that for digital native students, the efficiency of AI is taken for granted and has effectively devolved into a hygiene factor (Tamilmani et al., 2021). Consequently, performance expectancy and effort expectancy showed non-significant effects on intention. This shift suggests that the driver of adoption has moved from tool utility to pedagogical novelty. The interactive, conversational nature of GenAI introduces a layer of situational interest that static textbooks lack (Huang et al., 2022). This novelty effect transforms the coding process from a solitary struggle with syntax into a collaborative dialog, generating intrinsic enjoyment (Wu et al., 2024; J. Li et al., 2025). This finding aligns with flow theory in programming education and suggests that AI reduces the frustration barrier, allowing students to enter a state of cognitive absorption that reinforces habit formation.

5.3. The Cognitive Brake Mechanism: System 1 vs. System 2

The study contributes a novel theoretical insight regarding the divergent role of ethical awareness. While it significantly and positively influences behavioral intention (β = 0.166, p = 0.011), it has a non-significant, slightly negative relationship with habit (β = −0.032, p = 0.653). This apparent contradiction can be explained through dual process theory (Kahneman, 2011). The link between ethical awareness and intention reflects cognitive legitimacy: by clarifying the gray areas of academic integrity, ethical training reduces perceived risk and empowers students to use AI responsibly (Dwivedi et al., 2020, 2023). Students intend to use AI because they know how to use it correctly. The link between ethical awareness and habit, by contrast, reflects a cognitive brake. Habit, by definition, implies automaticity: habit formation relies on system 1 thinking (automatic, fast, low cognitive load), whereas ethical reasoning involves system 2 thinking (deliberative, reflective, high cognitive load). High ethical awareness compels students to pause and evaluate the propriety of AI assistance for each specific task. This deliberative process acts as a cognitive brake, actively disrupting the formation of mindless, automated usage loops. Thus, the lack of a significant path to habit is not a failure of the intervention but a pedagogical success: it indicates that ethically aware students resist dependency and maintain a reflective distance from the technology, and that the ethical scaffolding intervention successfully prevented students from sleepwalking into mindless dependency.

5.4. Rational Constraints in an Experiential Model

Despite the dominance of hedonic drivers, the significance of price value (β = 0.222, p = 0.011) underscores that students remain rational economic actors. In the higher education context, where students may be price-sensitive, the perceived cost-effectiveness of premium AI tools acts as a practical constraint. This suggests that while pedagogical novelty sparks interest, sustainable adoption requires a perception of fair value, aligning with recent findings on the freemium models of EdTech (Strzelecki, 2024). Social influence was non-significant (β = 0.086, p = 0.349). This indicates that for higher education programming tasks, adoption is a highly personal, task-oriented decision driven by internal experience (hedonic motivation) rather than external peer pressure, highlighting the individualistic nature of human–AI collaboration.

5.5. Theoretical Implications

This study extends the UTAUT2 framework in two key dimensions. First, it validates the hygiene factor hypothesis in the context of GenAI, proposing that utilitarian constructs may lose predictive power when technology reaches a saturation point of efficiency. Second, it introduces the dual regulation model of ethics, demonstrating that ethical awareness is not merely an antecedent to behavior but a distinct regulatory mechanism that promotes conscious intention while inhibiting automatic habituation.

5.6. Practical Implications for Educators and Policymakers

Educators should design curricula that exploit the pedagogical novelty of AI rather than merely teaching syntax. Assignments should focus on human–AI collaboration and prompt engineering rather than rote memorization, thereby maintaining the situational interest that drives engagement. Ethical training should move beyond punitive warnings to competency building, which enables intention. The goal should be to foster intention without mindless habit: educators need to introduce friction into the workflow to engage system 2 thinking and prevent the atrophy of critical thinking skills.

5.7. Long-Term Sustainability of the Novelty Effect

Sustaining AI’s pedagogical novelty requires translating its transient situational interest into enduring, cognitively engaged learning rather than relying on its initial appeal alone. As learners grow familiar with AI tools, novelty naturally fades, increasing the risk of mindless reliance and eroded engagement if curricula focus only on surface-level use. To avoid this, curricula should employ iterative, friction-rich workflows that keep system 2 critical thinking active and replace punitive ethical warnings with competency building that fosters intentional judgment. Such design cultivates intentional AI use over mindless habit, ensuring that AI’s pedagogical value persists after the initial novelty diminishes, preventing the atrophy of critical thinking, and sustaining meaningful engagement for long-term learning.

6. Limitations and Future Research

While this study offers robust insights, several limitations warrant future investigation. First, the participants were recruited from regions at different levels of economic development in China and had varying experience with computer use, and males accounted for a large proportion of the sample. Future research may adjust the study design to address these factors. Second, the novelty effect is temporally sensitive, and future longitudinal studies should examine whether the dominance of hedonic motivation persists as AI tools become ubiquitous. Third, the measurement of habit relied on self-reports, and future research could employ system logs to capture objective usage frequency and behavioral patterns. Fourth, the cognitive brake hypothesis may warrant experimental validation by using neurophysiological measures to observe the real-time cognitive load during ethical decision-making in coding tasks. Finally, an important direction for future research is to explicitly examine the cross-cultural and longitudinal extensions of the model, thus enhancing its generalizability and practical value in varied settings over time.

7. Conclusions

As GenAI reshapes the landscape of CS education, understanding the psychological drivers of students’ adoption is essential. This study, grounded in a quasi-experimental design within a Python programming curriculum, reveals that the adoption of AI tools is no longer driven by mere efficiency. Instead, it is fueled by the pedagogical novelty and intrinsic enjoyment of the human–AI partnership. Crucially, we identified a cognitive brake mechanism whereby ethical awareness legitimizes conscious usage intention while preventing the formation of blind dependency habits. For educators and policymakers, this implies designing for novelty: curricula should concentrate on co-creating with AI rather than issuing commands, leveraging the tool’s interactive nature to sustain student interest. In addition, ethical training should not be just a lecture but a procedural requirement that triggers system 2 thinking, preventing the formation of unreflective habits. Meanwhile, given the significance of price value, institutions should consider subsidizing premium AI tools to ensure equitable access to these generative partners. Ultimately, this research suggests that the integration of AI in education is not a binary choice between prohibition and unrestrained use. By balancing the engaging power of pedagogical novelty with the reflective constraints of ethical awareness, educators can transform AI from a crutch for efficiency into a catalyst for critical, creative, and responsible learning.

Author Contributions

Conceptualization, H.Z. and P.P.; methodology, H.Z., B.M. and Y.-H.S.; software, K.I.C.; validation, P.P. and Y.-H.S.; formal analysis, H.Z. and K.I.C.; investigation, B.M.; resources, H.Z. and P.P.; data curation, K.I.C., B.M., and Y.-H.S.; writing—original draft, H.Z.; writing—review and editing, P.P.; visualization, H.Z.; supervision, P.P.; project administration, P.P.; funding acquisition, P.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Macao Science and Technology Development Fund (FDCT; funding ID: 0029/2025/AIJ) and Macao Polytechnic University research grant (project code: RP/FCA-24/2025).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Pedagogic Committee of the Faculty of Applied Sciences of the Macao Polytechnic University (protocol code HEA002-FCA-2024; date of approval: 27 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AVE: The Average Variance Extracted
√AVE: Square root of the AVE
BI: Behavioral Intention
CB: Cognitive Brake
CC: Cyclomatic Complexity
CLT: Cognitive Load Theory
CR: Composite Reliability
CS: Computer Science
EA: Ethical Awareness
EdTech: Educational Technology
EE: Effort Expectancy
GenAI: Generative AI
HM: Hedonic Motivation
HT: Habit
LLMs: Large Language Models
PN: Pedagogical Novelty
PV: Price Value
SEM: Structural Equation Modeling
SI: Social Influence
SID: Systematic Instructional Design
UTAUT2: Unified Theory of Acceptance and Use of Technology 2

Appendix A. The Python Algorithmic Problem List

1. Bubble Sort for Array Sorting.
   Description: Write a program to sort an integer array in ascending order by implementing the Bubble Sort algorithm. The core logic of bubble sort is to repeatedly compare adjacent elements and swap them if they are in the wrong order, until all elements are sorted.
   Requirement: Input an unordered integer array; output the sorted array by using bubble sort only.
2. Fibonacci Sequence.
   Description: Write a program to generate and print the first n numbers of the Fibonacci sequence. The Fibonacci sequence is defined as follows: the first two numbers are 0 and 1, and each subsequent number is the sum of the two preceding ones.
   Requirement: Enter a positive integer n, and output the first n Fibonacci numbers in order.
3. Solving Multivariable Equations with Nested Loops.
   Description: Buy 100 chickens with 100 coins. The price rules are: a rooster costs 5 coins, a hen costs 3 coins, and three chicks cost 1 coin. At least one of each type of chicken must be bought. Write a program to find all the possible combinations of roosters, hens, and chicks that meet the conditions.
   Requirement: Use nested loops to solve the problem; output all valid integer solutions of (rooster, hen, chick).
4. Solving Iterative Problems with the Recursive Method.
   Description: Write a program to calculate the sum of all elements in an integer array by using a pure recursive method (no loop allowed).
   Requirement: Decompose the array recursively, and calculate the sum of elements step by step.
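For reference, the four tasks admit compact Python solutions along the following lines (a minimal sketch; function names are illustrative and not part of the assignment specification):

```python
def bubble_sort(arr):
    """Sort an integer list in ascending order using bubble sort only."""
    a = list(arr)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:          # adjacent elements out of order
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def fibonacci(n):
    """Return the first n Fibonacci numbers (0, 1, 1, 2, ...)."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def hundred_chickens():
    """All (rooster, hen, chick) combinations: 100 birds for 100 coins,
    priced 5 / 3 / (1 coin per 3 chicks), at least one of each type."""
    solutions = []
    for rooster in range(1, 21):          # 5 coins each, so at most 20
        for hen in range(1, 34):          # 3 coins each, so at most 33
            chick = 100 - rooster - hen
            # chicks are sold 3 per coin, so the count must divide by 3
            if chick > 0 and chick % 3 == 0 and \
               5 * rooster + 3 * hen + chick // 3 == 100:
                solutions.append((rooster, hen, chick))
    return solutions

def recursive_sum(arr):
    """Sum an integer list with pure recursion (no loops)."""
    if not arr:
        return 0
    return arr[0] + recursive_sum(arr[1:])
```

For instance, `hundred_chickens()` yields exactly three combinations, since the constraints reduce to the single Diophantine equation 7·rooster + 4·hen = 100.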

Appendix B. Red Team Exercises (Triggering Moral Sensitivity)

These exercises are designed to purposefully trigger AI failures (hallucinations, bias, security flaws). By breaking the AI, students develop a healthy skepticism (moral sensitivity), which is a prerequisite for the cognitive brake.
1. Exposing AI Hallucinations (the non-existent references task).
   Context: AI models often hallucinate references that sound real but do not exist. This exercise proves that AI can confidently lie.
   Action: Participants ask AI to generate references according to a topic they propose, then check the references provided and find that most of them do not exist or contain mismatched information.
2. Security Vulnerability Injection (the SQL injection trap).
   Context: AI prioritizes helpfulness over security. Students learn that functional code is not necessarily secure code, and that AI defaults to the simplest (often insecure) solution.
   Action: Participants ask AI to write a simple Python Flask route that takes a username and password from a POST request and checks if they exist in the SQLite database. After checking the code, participants identify the flaw and force the AI to fix it by prompting: “The previous code is vulnerable to SQL injection. Rewrite it using parameterized queries.”
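The SQL injection trap can be reproduced in a self-contained sketch. For brevity it queries an in-memory SQLite database directly rather than going through the Flask route described in the exercise, and all names are illustrative:

```python
import sqlite3

# An in-memory stand-in for the exercise's user database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # The kind of code AI assistants often produce first: user input is
    # concatenated directly into the SQL string.
    query = (f"SELECT 1 FROM users WHERE username = '{username}' "
             f"AND password = '{password}'")
    return conn.execute(query).fetchone() is not None

def login_safe(username, password):
    # The fix students prompt for: parameterized placeholders let the
    # driver treat the input as data, not SQL.
    query = "SELECT 1 FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

# The classic injection payload bypasses the vulnerable check...
payload = "' OR '1'='1"
print(login_vulnerable("anyone", payload))  # True: authentication bypassed
# ...but not the parameterized one.
print(login_safe("anyone", payload))        # False
```

Because AND binds tighter than OR, the concatenated WHERE clause becomes `(username = 'anyone' AND password = '') OR '1'='1'`, which is always true; the parameterized version escapes the payload and rejects it.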

Appendix C. Ethical Principles

When utilizing Artificial Intelligence (AI), users need to consider mainly eight key areas of ethical concern, and this is core for ethical training regarding AI.
1. Transparency and Explicability: Users need to comprehend the decision-making processes of AI systems, ensure transparency in AI behavior, and be able to provide clear explanations and feedback on its decisions during usage.
2. Responsibility and Accountability: When employing AI, users should clearly define the attribution of responsibility for AI systems, especially in cases where AI systems generate errors or impact society, specifying who is accountable and the consequences thereof.
3. Privacy Protection: Users’ personal data should be safeguarded. AI systems must comply with privacy protection regulations and refrain from unauthorized collection, storage, or misuse of personal information.
4. Security: Users should ensure that AI systems do not pose security threats to themselves or others during use, including data breaches, technological malfunctions, or improper utilization.
5. Fairness and Unbiasedness: Users should ensure that the utilization of AI systems does not lead to or exacerbate any form of discrimination or bias, particularly in sensitive domains such as race, gender, and age.
6. Sustainability and Environmental Impact: Users should be cognizant of the environmental impact of AI technologies, including energy consumption and resource utilization, and strive to select environmentally friendly technological applications that promote sustainable development.
7. Autonomy: AI should always remain under human control, ensuring that system operations and decision-making processes align with human intentions, avoiding fully autonomous AI decision-making, especially in high-risk scenarios.
8. Prevention of Technological Misuse: Users need to ensure that AI technology is not misused or employed for malicious purposes, such as generating false information, social manipulation, or illegal activities.

Appendix D. The AI Disclosure Checklist

1. ____%: Code written entirely by a human without AI assistance.
2. ____%: Code generated by AI but significantly modified/optimized by a human.
3. ____%: Pure AI generation (justification required).

Appendix E. Post-Intervention Questionnaire

Note: All items were measured on a 7-point Likert scale (1 = Strongly Disagree, 7 = Strongly Agree).
Adapted from Venkatesh et al. (2012).
Performance Expectancy (PE)
  • I believe AI technology can help me improve my learning efficiency.
  • I think using AI technology will make me perform better academically.
  • I believe AI tools can help me complete learning tasks more efficiently.
  • AI technology makes me understand complex subjects faster.
  • I think using AI technology can save my study time.
  • I believe AI technology has a positive impact on my academic performance.
Social Influence (SI)
  • My friends are using AI technology, so I tend to use it as well.
  • My teachers believe I can use AI technology, so I actively use AI.
  • My classmates all think that using AI tools can improve academic performance, so I use AI.
  • I feel that since everyone around me is using AI technology, it is the right choice to use it.
  • I feel that my classmates believe AI technology is useful for learning, so I am willing to try it.
  • My friends think that AI technology can improve academic performance, which makes me more willing to use it.
Hedonic Motivation (HM)
  • Using AI technology makes me feel happy during my studies.
  • I like using AI tools because using them is relaxing and interesting.
  • Using AI technology makes me feel interested and drives my interest in learning.
  • I like to discover new learning methods through AI technology.
  • AI technology makes me feel relaxed and happy while studying.
  • Using AI technology makes me think that learning is more interesting and challenging.
Effort Expectancy (EE)
  • Learning to use AI technology is easy for me.
  • I find the operation of AI systems is easy.
  • I cannot easily use AI tools.
  • Using AI technology does not require much effort from me.
  • I feel that using AI is not complicated.
  • I can use AI applications without difficulty.
Price Value (PV)
  • I think the cost of using AI technology is relatively reasonable.
  • The benefits of using AI technology outweigh the costs I pay.
  • I believe the use of AI tools is affordable.
  • I am willing to pay a certain amount to obtain the learning convenience brought by AI technology.
  • I think the cost of using AI technology is worth it.
  • I feel that AI technology can bring enough returns to my learning.
Habit (H)
  • I have already developed the habit of using AI tools in my studies.
  • Using AI technology has become a regular part of my learning process.
  • Whenever I have study tasks, I first think of using AI tools.
  • Using AI technology has become one of my learning habits.
  • I often use AI technology to help me complete learning tasks.
  • It is inconvenient for me to learn without AI technology.
Behavioral Intention (BI)
  • Even though there are concerns about privacy, I still want to continue using AI technology.
  • I am willing to continue using AI technology in the future, provided it adheres to ethical standards.
  • Despite ethical considerations, I still plan to continue using AI tools.
  • I intend to recommend others use AI technology, but only if it follows ethical guidelines.
  • AI technology is expected to have a positive impact on my learning, so I will continue using it.
  • I am willing to accept and promote AI technologies that meet ethical standards.
Ethical Awareness (EA)
  • I’m not worried that AI technology might invade my privacy.
  • I feel concerned about whether AI technology will store and use my personal information safely.
  • I think AI systems may have unfairness or biases in their decision-making processes.
  • I believe AI technology should follow ethical rules to avoid bringing negative impacts to users.
  • I’m quite cautious about AI technology processing my data, especially when there’s not enough transparency.
  • I hope there can be stricter ethical standards to protect users’ rights and interests when we use AI technology.

References

  1. Al-Mughairi, H., & Bhaskar, P. (2024). Exploring the factors affecting the adoption AI techniques in higher education: Insights from teachers’ perspectives on ChatGPT. Journal of Research in Innovative Teaching & Learning, 18(2), 232–247. [Google Scholar] [CrossRef]
  2. Ambalov, I. A. (2021). An investigation of technology trust and habit in IT use continuance: A study of a social network. Journal of Systems and Information Technology, 23(1), 53–81. [Google Scholar] [CrossRef]
  3. Artemova, I. (2024). Bridging motivation and AI in education: An activity theory perspective. Digital Education Review, (45), 59–67. [Google Scholar] [CrossRef]
  4. Asher, M. W., & Harackiewicz, J. M. (2025). Using choice and utility value to promote interest: Stimulating situational interest in a lesson and fostering the development of interest in statistics. Journal of Educational Psychology, 117(4), 647–662. [Google Scholar] [CrossRef]
  5. Ayinla, B. S., Amoo, O. O., Atadoga, A., Abrahams, T. O., Osasona, F., & Farayola, O. A. (2024). Ethical AI in practice: Balancing technological advancements with human values. International Journal of Science and Research Archive, 11(1), 1311–1326. [Google Scholar] [CrossRef]
  6. Barbosa, P. L. S., do Carmo, R. A. F., Gomes, J. P. P., & Viana, W. (2024). Adaptive learning in computer science education: A scoping review. Education and Information Technologies, 29(8), 9139–9188. [Google Scholar] [CrossRef]
  7. Becker, B. A., Denny, P., Finnie-Ansley, J., Luxton-Reilly, A., Prather, J., & Santos, E. A. (2023, March 15–18). Programming is hard—Or at least it used to be: Educational opportunities and challenges of AI code generation. 54th ACM Technical Symposium on Computer Science Education V. 1, SIGCSE 2023 (pp. 500–506), Toronto, ON, Canada. [Google Scholar] [CrossRef]
  8. Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge. [Google Scholar] [CrossRef]
  9. Cano, J. R., & Nunez, N. A. (2024). Unlocking innovation: How enjoyment drives GenAI use in higher education. Frontiers in Education, 9, 1483853. [Google Scholar] [CrossRef]
  10. Chang, C.-W., & Chang, S.-H. (2023). The impact of digital disruption: Influences of digital media and social networks on forming digital natives’ attitude. Sage Open, 13(3), 21582440231191741. [Google Scholar] [CrossRef]
  11. Chen, C.-F., & Chao, W.-H. (2011). Habitual or reasoned? Using the theory of planned behavior, technology acceptance model, and habit to examine switching intentions toward public transit. Transportation Research Part F: Traffic Psychology and Behaviour, 14(2), 128–137. [Google Scholar] [CrossRef]
  12. Chen, C.-H., & Chang, C.-L. (2024). Effectiveness of AI-assisted game-based learning on science learning outcomes, intrinsic motivation, cognitive load, and learning behavior. Education and Information Technologies, 29(14), 18621–18642. [Google Scholar] [CrossRef]
  13. Chen, X., Zou, D., Xie, H., & Wang, F. L. (2021). Past, present, and future of smart learning: A topic-based bibliometric analysis. International Journal of Educational Technology in Higher Education, 18(1), 2. [Google Scholar] [CrossRef]
  14. Chiu, T. K. F. (2021). A holistic approach to the design of artificial intelligence (AI) education for K-12 schools. TechTrends, 65(5), 796–807. [Google Scholar] [CrossRef]
  15. Chiu, T. K. F., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6, 100171. [Google Scholar] [CrossRef]
  16. Cohen, J. (2013). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge. [Google Scholar] [CrossRef]
  17. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
  18. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  19. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. [Google Scholar] [CrossRef]
  20. Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734. [Google Scholar] [CrossRef]
  21. Dwivedi, Y. K., Rana, N. P., Tamilmani, K., & Raman, R. (2020). A meta-analysis based modified unified theory of acceptance and use of technology (meta-UTAUT): A review of emerging literature. Current Opinion in Psychology, Cyberpsychology, 36, 13–18. [Google Scholar] [CrossRef]
  22. Dzogovic, S., Zdravkovska-Adamova, B., & Serpil, H. (2024). From theory to practice: A holistic study of the application of artificial intelligence methods and techniques in higher education and science. Human Research in Rehabilitation, 14(2), 293–311. [Google Scholar] [CrossRef]
  23. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
  24. Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. [Google Scholar] [CrossRef]
  25. Grange, C., Demazure, T., Ringeval, M., Bourdeau, S., & Martineau, C. (2026). The human-GenAI value loop in human-centered innovation: Beyond the magical narrative. Information Systems Journal, 36(1), 29–51. [Google Scholar] [CrossRef]
  26. Guo, Z., & Fryer, L. K. (2025). What really elicits learners’ situational interest in learning activities: A scoping review of six most commonly researched types of situational interest sources in educational settings. Current Psychology, 44(1), 587–601. [Google Scholar] [CrossRef]
  27. Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. [Google Scholar] [CrossRef]
  28. Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. [Google Scholar] [CrossRef]
  29. Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. [Google Scholar] [CrossRef]
  30. Huang, W., Hew, K. F., & Fryer, L. K. (2022). Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. Journal of Computer Assisted Learning, 38(1), 237–257. [Google Scholar] [CrossRef]
  31. Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 100017. [Google Scholar] [CrossRef]
  32. Kahneman, D. (2011). Thinking, fast and slow (p. 499). Farrar, Straus and Giroux. [Google Scholar]
  33. Kim, H.-W., Chan, H. C., & Gupta, S. (2007). Value-based adoption of mobile internet: An empirical investigation. Decision Support Systems, Mobile Commerce: Strategies, Technologies, and Applications, 43(1), 111–126. [Google Scholar] [CrossRef]
  34. Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed., pp. xvii, 534). The Guilford Press. [Google Scholar]
  35. Kocielnik, R., Amershi, S., & Bennett, P. N. (2019, May 4–9). Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. 2019 CHI Conference on Human Factors in Computing Systems, CHI’19 (pp. 1–14), Glasgow, Scotland. [Google Scholar] [CrossRef]
  36. Li, J., Zhang, J., Chai, C. S., Lee, V. W. Y., Zhai, X., Wang, X., & King, R. B. (2025). Analyzing the network structure of students’ motivation to learn AI: A self-determination theory perspective. npj Science of Learning, 10(1), 48. [Google Scholar] [CrossRef] [PubMed]
  37. Li, M., Enkhtur, A., Cheng, F., & Yamamoto, B. A. (2024). Ethical implications of ChatGPT in higher education: A scoping review. arXiv, arXiv:2311.14378. [Google Scholar]
  38. Long, X., Tan, X., Zhu, Y., Jiang, J., & Zhang, L. (2025). Understanding and enhancing CS students’ interaction experience with AI coding assistant tools. ACM Transactions on Software Engineering and Methodology. [Google Scholar] [CrossRef]
  39. Lu, Q., Zhu, L., Xu, X., Whittle, J., Zowghi, D., & Jacquet, A. (2024). Responsible AI pattern catalogue: A collection of best practices for AI governance and engineering. ACM Computing Surveys, 56(7), 173:1–173:35. [Google Scholar] [CrossRef]
  40. Manorat, P., Tuarob, S., & Pongpaichet, S. (2025). Artificial intelligence in computer programming education: A systematic literature review. Computers and Education: Artificial Intelligence, 8, 100403. [Google Scholar] [CrossRef]
  41. Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320–341. [Google Scholar] [CrossRef]
  42. McIntire, A., Calvert, I., & Ashcraft, J. (2024). Pressure to plagiarize and the choice to cheat: Toward a pragmatic reframing of the ethics of academic integrity. Education Sciences, 14(3), 244. [Google Scholar] [CrossRef]
  43. Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A. (2022). Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems, 31(3), 257–268. [Google Scholar] [CrossRef]
  44. Min, B., & Schwarz, N. (2022). Novelty as opportunity and risk: A situated cognition analysis of psychological control and novelty seeking. Journal of Consumer Psychology, 32(3), 425–444. [Google Scholar] [CrossRef]
  45. Moorhouse, B. L., Li, Y., & Walsh, S. (2023). E-classroom interactional competencies: Mediating and assisting language learning during synchronous online lessons. RELC Journal, 54(1), 114–128. [Google Scholar] [CrossRef]
  46. Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4), 3005–3054. [Google Scholar] [CrossRef]
  47. Olorunfemi, O. L., Amoo, O. O., Atadoga, A., Fayayola, O. A., Abrahams, T. O., & Shoetan, P. O. (2024). Towards a conceptual framework for ethical AI development in IT systems. Computer Science & IT Research Journal, 5(3), 616–627. [Google Scholar] [CrossRef]
  48. Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT. Journal of Interactive Learning Research, 34(2), 213–237. [Google Scholar] [CrossRef]
  49. Petrescu, M.-A., Pop, E.-L., & Dan Mihoc, T. (2023). Students’ interest in knowledge acquisition in Artificial Intelligence. Procedia Computer Science, 225, 1028–1036. [Google Scholar] [CrossRef]
  50. Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. [Google Scholar] [CrossRef]
  51. Prashar, A., Gupta, P., & Dwivedi, Y. K. (2024). Plagiarism awareness efforts, students’ ethical judgment and behaviors: A longitudinal experiment study on ethical nuances of plagiarism in higher education. Studies in Higher Education, 49(6), 929–955. [Google Scholar] [CrossRef]
  52. Prather, J., Reeves, B. N., Denny, P., Becker, B. A., Leinonen, J., Luxton-Reilly, A., Powell, G., Finnie-Ansley, J., & Santos, E. A. (2023). “It’s weird that it knows what I want”: Usability and interactions with copilot for novice programmers. ACM Transactions on Computer-Human Interaction, 31(1), 4:1–4:31. [Google Scholar] [CrossRef]
  53. Qureshi, B. (2023, June 9–11). Exploring the use of ChatGPT as a tool for learning and assessment in undergraduate computer science curriculum: Opportunities and challenges. 2023 9th International Conference on E-Society e-Learning and e-Technologies (pp. 7–13), Portsmouth, UK. [Google Scholar] [CrossRef]
  54. Robertson, P., & Georgeon, O. L. (2025). Intrinsic motivation for artificial agents. In P. Robertson, & O. Georgeon (Eds.), Situated self-guided learning (pp. 88–120). Springer Nature. [Google Scholar] [CrossRef]
  55. Romero, M. (2025). From consumption to co-creation: A systematic review of six levels of AI-enhanced creative engagement in education. Multimodal Technologies and Interaction, 9(10), 110. [Google Scholar] [CrossRef]
  56. Sari, H. E., Tumanggor, B., & Efron, D. (2024). Improving educational outcomes through adaptive learning systems using AI. International Transactions on Artificial Intelligence, 3(1), 21–31. [Google Scholar] [CrossRef]
  57. Shaukat, K., Iqbal, F., Alam, T. M., Aujla, G. K., Devnath, L., Khan, A. G., Iqbal, R., Shahzadi, I., & Rubab, A. (2020). The impact of artificial intelligence and robotics on the future employment opportunities. Trends in Computer Science and Information Technology, 5(1), 050–054. [Google Scholar] [CrossRef]
  58. Shrestha, S., & Das, S. (2022). Exploring gender biases in ML and AI academic research through systematic literature review. Frontiers in Artificial Intelligence, 5, 976838. [Google Scholar] [CrossRef]
  59. Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. [Google Scholar] [CrossRef]
  60. Sweller, J. (2011). Cognitive load theory. In The psychology of learning and motivation: Cognition in education (Vol. 55, pp. 37–76). Elsevier Academic Press. [Google Scholar] [CrossRef]
  61. Takona, J. P. (2024). Research design: Qualitative, quantitative, and mixed methods approaches/sixth edition. Quality & Quantity, 58(1), 1011–1013. [Google Scholar] [CrossRef]
  62. Tamilmani, K., Rana, N. P., & Dwivedi, Y. K. (2021). Consumer acceptance and use of information technology: A meta-analytic evaluation of UTAUT2. Information Systems Frontiers, 23(4), 987–1005. [Google Scholar] [CrossRef]
  63. Tian, J., & Zhang, R. (2025). Learners’ AI dependence and critical thinking: The psychological mechanism of fatigue and the social buffering role of AI literacy. Acta Psychologica, 260, 105725. [Google Scholar] [CrossRef]
  64. Tlili, A., Bond, M., Bozkurt, A., Arar, K., Chiu, T. K. F., & Rospigliosi, A. (2025). Academic integrity in the generative AI (GenAI) era: A collective editorial response. Interactive Learning Environments, 33(3), 1819–1822. [Google Scholar] [CrossRef]
  65. Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15. [Google Scholar] [CrossRef]
  66. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. [Google Scholar] [CrossRef]
  67. Wu, D., Zhang, S., Ma, Z., Yue, X.-G., & Dong, R. K. (2024). Unlocking potential: Key factors shaping undergraduate self-directed learning in AI-enhanced educational environments. Systems, 12(9), 332. [Google Scholar] [CrossRef]
  68. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. [Google Scholar] [CrossRef]
  69. Zou, H., Chan, K. I., Pang, P. C.-I., Manditereza, B., & Shih, Y.-H. (2025). Factors influencing the reported intention of higher vocational computer science students in China to use AI after ethical training: A study in Guangdong Province. Education Sciences, 15(11), 1431. [Google Scholar] [CrossRef]
Figure 1. Research model based on UTAUT2 with a new dimension of ethical awareness.
Figure 2. The dual-track instructional framework.
Figure 3. Workflow of the AI-integrated curriculum (track A).
Figure 4. The cognitive brake mechanism in ethical training (Track B).
Figure 5. Data collection and screening flowchart.
Figure 6. Basic features of the sample.
Figure 7. Residual plot for habit.
Figure 8. Residual plot for behavioral intention.
Figure 9. Results of the tests after the intervention.
Table 1. Overall phases of the research with theoretical construct and mechanism.
| Phase | Instructional Activity / Material | Theoretical Construct and Mechanism | Key References |
|---|---|---|---|
| Week 1–3 | Manual vs. AI challenge: students compare manual coding time vs. AI generation | Pedagogical novelty: triggering situational interest via contrast; activating hedonic motivation | (Huang et al., 2022; C.-H. Chen & Chang, 2024) |
| Week 1–3 | Red-team exercise: probing AI for hallucinations and bias | Moral sensitivity: exposing risks to build skepticism | (Dwivedi et al., 2023; Tlili et al., 2023) |
| Week 4–8 | Decomposition-based prompting: writing logic before code | Cognitive scaffolding: reducing extraneous load while maintaining germane load | (Becker et al., 2023; Prather et al., 2023) |
| Week 4–13 | AI Disclosure Checklist: mandatory form for every submission | Cognitive brake (System 2): disrupting automaticity (habit) to enforce deliberation | (Kahneman, 2011; Cotton et al., 2024) |
| Week 9–13 | The AI Auditor Task: fixing buggy AI code | Human-in-the-loop: countering the hygiene-factor perception; building competence | (Moorhouse et al., 2023; Mosqueira-Rey et al., 2023) |
| Week 14–16 | Capstone co-creation: 80% AI code allowed | Creative empowerment: reinforcing hedonic motivation through creative output | (Chiu, 2021; Fui-Hoon Nah et al., 2023) |
| Week 14–16 | Code of Conduct co-design: class voting on rules | Cognitive legitimacy: internalizing norms to support behavioral intention | (Dwivedi et al., 2023) |
Table 2. Construct validity and reliability.
| Construct | Item | Factor Loading | Cronbach's Alpha | AVE | Composite Reliability |
|---|---|---|---|---|---|
| Performance expectancy (PE) | PE1 | 0.777 | 0.861 | 0.514 | 0.863 |
|  | PE2 | 0.752 |  |  |  |
|  | PE3 | 0.743 |  |  |  |
|  | PE4 | 0.681 |  |  |  |
|  | PE5 | 0.699 |  |  |  |
|  | PE6 | 0.639 |  |  |  |
| Effort expectancy (EE) | EE1 | 0.770 | 0.872 | 0.541 | 0.875 |
|  | EE2 | 0.767 |  |  |  |
|  | EE3 | 0.833 |  |  |  |
|  | EE4 | 0.604 |  |  |  |
|  | EE5 | 0.677 |  |  |  |
|  | EE6 | 0.739 |  |  |  |
| Social influence (SI) | SI1 | 0.709 | 0.843 | 0.475 | 0.844 |
|  | SI2 | 0.702 |  |  |  |
|  | SI3 | 0.638 |  |  |  |
|  | SI4 | 0.752 |  |  |  |
|  | SI5 | 0.659 |  |  |  |
|  | SI6 | 0.668 |  |  |  |
| Hedonic motivation (HM) | HM1 | 0.810 | 0.881 | 0.556 | 0.882 |
|  | HM2 | 0.842 |  |  |  |
|  | HM3 | 0.754 |  |  |  |
|  | HM4 | 0.646 |  |  |  |
|  | HM5 | 0.743 |  |  |  |
|  | HM6 | 0.658 |  |  |  |
| Price value (PV) | PV1 | 0.736 | 0.890 | 0.547 | 0.890 |
|  | PV2 | 0.755 |  |  |  |
|  | PV3 | 0.770 |  |  |  |
|  | PV4 | 0.749 |  |  |  |
|  | PV5 | 0.770 |  |  |  |
|  | PV6 | 0.766 |  |  |  |
| Habit (HT) | HT1 | 0.785 | 0.891 | 0.583 | 0.893 |
|  | HT2 | 0.834 |  |  |  |
|  | HT3 | 0.750 |  |  |  |
|  | HT4 | 0.782 |  |  |  |
|  | HT5 | 0.765 |  |  |  |
|  | HT6 | 0.654 |  |  |  |
| Behavioral intention (BI) | BI1 | 0.544 | 0.789 | 0.386 | 0.787 |
|  | BI2 | 0.650 |  |  |  |
|  | BI3 | 0.472 |  |  |  |
|  | BI4 | 0.650 |  |  |  |
|  | BI5 | 0.690 |  |  |  |
|  | BI6 | 0.692 |  |  |  |
| Ethical awareness (EA) | EA1 | 0.606 | 0.794 | 0.403 | 0.797 |
|  | EA2 | 0.661 |  |  |  |
|  | EA3 | 0.393 |  |  |  |
|  | EA4 | 0.721 |  |  |  |
|  | EA5 | 0.715 |  |  |  |
|  | EA6 | 0.655 |  |  |  |
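As a cross-check, the AVE and composite-reliability figures in Table 2 can be reproduced from the standardized factor loadings alone. The sketch below is our illustration (function names are ours, not from the paper), applied to the performance-expectancy items:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Standardized loadings for PE1-PE6, transcribed from Table 2
pe = [0.777, 0.752, 0.743, 0.681, 0.699, 0.639]
print(round(ave(pe), 3))                    # 0.514, matching the reported AVE
print(round(composite_reliability(pe), 3))  # 0.863, matching the reported CR
```

The same two functions reproduce the AVE and CR columns for the remaining constructs from their loading columns.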
Table 3. Discriminant validity between constructs.
| Construct | PE | EE | SI | HM | PV | HT | EA | BI |
|---|---|---|---|---|---|---|---|---|
| PE | 0.717 |  |  |  |  |  |  |  |
| EE | 0.573 ** | 0.736 |  |  |  |  |  |  |
| SI | 0.583 ** | 0.531 ** | 0.689 |  |  |  |  |  |
| HM | 0.660 ** | 0.596 ** | 0.671 ** | 0.746 |  |  |  |  |
| PV | 0.507 ** | 0.541 ** | 0.441 ** | 0.579 ** | 0.740 |  |  |  |
| HT | 0.434 ** | 0.443 ** | 0.591 ** | 0.578 ** | 0.478 ** | 0.764 |  |  |
| EA | 0.235 ** | 0.209 * | 0.144 | 0.208 * | 0.263 ** | 0.121 | 0.634 |  |
| BI | 0.530 ** | 0.522 ** | 0.543 ** | 0.648 ** | 0.632 ** | 0.517 ** | 0.312 ** | 0.621 |
Diagonal entries are the square root of each construct's AVE (√AVE). ** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
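The Fornell–Larcker criterion behind Table 3 compares each construct's √AVE (the diagonal) against its correlations with every other construct. A minimal sketch of that check (our illustration; `fl_holds` is our naming, and the AVE values are transcribed from Table 2):

```python
import math

# AVE values transcribed from Table 2
ave = {"PE": 0.514, "EE": 0.541, "SI": 0.475, "HM": 0.556,
       "PV": 0.547, "HT": 0.583, "BI": 0.386, "EA": 0.403}

# The diagonal of Table 3 is the square root of each construct's AVE
sqrt_ave = {k: round(math.sqrt(v), 3) for k, v in ave.items()}
print(sqrt_ave["PE"], sqrt_ave["BI"])  # 0.717 0.621, matching the table's diagonal

def fl_holds(corr, a, b):
    """Fornell-Larcker: the inter-construct correlation must stay below
    the sqrt(AVE) of both constructs involved."""
    return corr < sqrt_ave[a] and corr < sqrt_ave[b]
```

With `fl_holds`, any off-diagonal cell of Table 3 can be tested against the two relevant diagonal entries, e.g. `fl_holds(0.573, "PE", "EE")` for the PE–EE pair.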
Table 4. Model fitness test metrics.
| χ²/df | RMSEA | GFI | AGFI | CFI | IFI | TLI |
|---|---|---|---|---|---|---|
| 1.52 | 0.062 | 0.691 | 0.655 | 0.845 | 0.849 | 0.835 |
Table 5. Hypothesis test results.
| Hypothesis | Independent Construct | Dependent Construct | R² | Path Coeff. | t | p-Value | Result |
|---|---|---|---|---|---|---|---|
| H1 | Hedonic motivation | Habit | 0.366 | 0.457 | 5.356 | <0.001 | Supported |
| H2 | Hedonic motivation | Behavioral intention | 0.507 | 0.336 | 3.377 | 0.001 | Supported |
| H3 | Ethical awareness | Behavioral intention | 0.507 | 0.166 | 2.594 | 0.011 | Supported |
| H4 | Ethical awareness | Habit | 0.366 | −0.032 | −0.450 | 0.653 | Not supported |
| H5 | Performance expectancy | Behavioral intention | 0.507 | 0.076 | 0.863 | 0.390 | Not supported |
| H6 | Effort expectancy | Behavioral intention | 0.507 | 0.125 | 1.523 | 0.130 | Not supported |
| H7 | Social influence | Behavioral intention | 0.507 | 0.086 | 0.939 | 0.349 | Not supported |
| H8 | Price value | Habit | 0.366 | 0.222 | 2.565 | 0.011 | Supported |
| H9 | Habit | Behavioral intention | 0.507 | 0.162 | 2.012 | 0.046 | Supported |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zou, H.; Chan, K.I.; Pang, P.; Manditereza, B.; Shih, Y.-H. To Use but Not to Depend: Pedagogical Novelty and the Cognitive Brake of Ethical Awareness in Computer Science Students’ Adoption of Generative AI. Educ. Sci. 2026, 16, 311. https://doi.org/10.3390/educsci16020311

