To Use but Not to Depend: Pedagogical Novelty and the Cognitive Brake of Ethical Awareness in Computer Science Students’ Adoption of Generative AI
Abstract
1. Introduction
- RQ1: Does the pedagogical novelty of GenAI-integrated resources supersede traditional utilitarian performance expectancy and effort expectancy in shaping CS students’ adoption behaviors?
- RQ2: How does ethical awareness differentially regulate the conscious intention to use AI versus the formation of automated habits, acting as a cognitive mechanism to prevent mindless dependency?
- RQ3: Does the intrinsic enjoyment derived from AI interaction act as the primary catalyst for habit formation, mediating the relationship between pedagogical novelty and sustained usage intention?
2. Literature Review
2.1. Generative AI as a Pedagogical Medium Beyond Static Resources
2.2. Theoretical Lens: UTAUT2 and the Hygiene Factor Phenomenon
2.3. Reframing Hedonic Motivation: Situational Interest and Pedagogical Novelty
2.4. The Dual Nature of Ethical Awareness: Legitimacy vs. Automaticity
2.5. Systematic Instructional Design (SID) and Cognitive Load Theory (CLT)
3. Research Design
3.1. Research Model and Hypotheses Development
3.1.1. The Dominance of Hedonic Motivation (The Novelty Effect)
3.1.2. The Cognitive Brake of Ethical Awareness
3.1.3. The Hygiene Factors and Rational Constraints
3.1.4. The Habituation Pathway
3.2. Instructional Intervention Design
3.2.1. Track A: The AI-Integrated Curriculum (Pedagogical Novelty)
3.2.2. Track B: Ethical Awareness Training Design (The Ethical Scaffolding/The Cognitive Brake)
3.3. Data Collection Procedure
4. Results
4.1. Characteristics of the Samples
4.2. Reliability and Validity of Constructs
4.3. Discriminant Validity
4.4. Regression Analyses
5. Discussion
5.1. The Eclipse of Utility: Efficiency as a Hygiene Factor
5.2. The Primacy of Pedagogical Novelty: Hedonic Motivation as the Engine
5.3. The Cognitive Brake Mechanism: System 1 vs. System 2
5.4. Rational Constraints in an Experiential Model
5.5. Theoretical Implications
5.6. Practical Implications for Educators and Policymakers
5.7. Long-Term Sustainability of the Novelty Effect
6. Limitations and Future Research
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| AVE | Average Variance Extracted |
| √AVE | Square Root of the AVE |
| BI | Behavioral Intention |
| CB | Cognitive Brake |
| CC | Cyclomatic Complexity |
| CLT | Cognitive Load Theory |
| CR | Composite Reliability |
| CS | Computer Science |
| EA | Ethical Awareness |
| EdTech | Educational Technology |
| EE | Effort Expectancy |
| GenAI | Generative AI |
| HM | Hedonic Motivation |
| HT | Habit |
| LLMs | Large Language Models |
| PN | Pedagogical Novelty |
| PV | Price Value |
| SEM | Structural Equation Modeling |
| SI | Social Influence |
| SID | Systematic Instructional Design |
| UTAUT2 | Unified Theory of Acceptance and Use of Technology 2 |
Appendix A. The Python Algorithmic Problem List
| Algorithmic Type | Description | Requirement |
|---|---|---|
| Bubble Sort for Array Sorting | Write a program to sort an integer array in ascending order by implementing the Bubble Sort algorithm. The core logic of bubble sort is to repeatedly compare adjacent elements and swap them if they are in the wrong order, until all elements are sorted. | Input an unordered integer array, output the sorted array by using bubble sort only. |
| Fibonacci Sequence | Write a program to generate and print the first n numbers of the Fibonacci sequence. The Fibonacci sequence is defined as follows: the first two numbers are 0 and 1, and each subsequent number is the sum of the two preceding ones. | Enter a positive integer n, and output the first n Fibonacci numbers in order. |
| Solving Multivariable Equations with Nested Loops | Buy 100 chickens with 100 coins. The price rules are: a rooster costs 5 coins, a hen costs 3 coins, and three chicks cost 1 coin. At least one of each type of chicken must be bought. Write a program to find all possible combinations of roosters, hens, and chicks that meet the conditions. | Use nested loops to solve the problem; output all valid integer solutions of (rooster, hen, chick). |
| Solving Iterative Problems with the Recursive Method | Write a program to calculate the sum of all elements in an integer array using a pure recursive method (no loops allowed). | Decompose the array recursively and accumulate the sum of elements step by step. |
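Two of the tasks above (the nested-loop equation search and the pure-recursive array sum) can be sketched as follows; the function names are illustrative, not from the study materials.

```python
def hundred_chickens():
    """Nested-loop search: buy exactly 100 chickens for exactly 100 coins.
    Roosters cost 5 coins, hens 3 coins, and chicks 3 for 1 coin;
    at least one of each type must be bought."""
    solutions = []
    for rooster in range(1, 20):        # 5 * 20 = 100, so at most 19 roosters
        for hen in range(1, 34):        # 3 * 34 > 100, so at most 33 hens
            chick = 100 - rooster - hen  # total count must be exactly 100
            # chicks are sold 3 per coin, so their count must divide by 3
            if (chick >= 1 and chick % 3 == 0
                    and 5 * rooster + 3 * hen + chick // 3 == 100):
                solutions.append((rooster, hen, chick))
    return solutions


def recursive_sum(arr):
    """Sum an integer array with pure recursion (no loops allowed)."""
    if not arr:                          # base case: empty array sums to 0
        return 0
    return arr[0] + recursive_sum(arr[1:])  # head + sum of the rest
```

With these constraints, `hundred_chickens()` yields the three classic solutions `[(4, 18, 78), (8, 11, 81), (12, 4, 84)]`.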
Appendix B. Red Team Exercises (Triggering Moral Sensitivity)
| Exercise List | Context | Action |
|---|---|---|
| Exposing AI Hallucinations (The non-existent references task) | AI models often hallucinate references that sound real but do not exist. This exercise demonstrates that AI can confidently fabricate information. | Participants ask the AI to generate references on a topic of their choosing, then verify each reference and find that most do not exist or contain mismatched information. |
| Security Vulnerability Injection (The SQL injection trap) | AI prioritizes helpfulness over security. Students learn that functional code is not necessarily secure code, and that AI defaults to the simplest (often insecure) solution. | Participants ask the AI to write a simple Python Flask route that takes a username and password from a POST request and checks whether they exist in an SQLite database. After reviewing the code, participants identify the flaw and force the AI to fix it by prompting: “The previous code is vulnerable to SQL injection. Rewrite it using parameterized queries.” |
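A minimal sketch of the flaw this exercise targets, using the `sqlite3` standard library directly (the Flask route is omitted for brevity); the table schema, credentials, and function names are illustrative, not taken from the study materials.

```python
import sqlite3

def setup_db():
    # In-memory database with one illustrative user record.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return conn

def login_vulnerable(conn, username, password):
    # BAD: user input is interpolated directly into the SQL string —
    # the pattern AI assistants often generate by default.
    query = (f"SELECT * FROM users WHERE username = '{username}' "
             f"AND password = '{password}'")
    return conn.execute(query).fetchone() is not None

def login_safe(conn, username, password):
    # GOOD: parameterized query; the driver treats inputs as data, not SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

conn = setup_db()
payload = "' OR '1'='1"  # classic injection: makes the WHERE clause always true
```

Here `login_vulnerable(conn, payload, payload)` returns `True` (authentication bypassed), while `login_safe(conn, payload, payload)` correctly returns `False`.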
Appendix C. Ethical Principles
| Item | Ethical Principles | Description |
|---|---|---|
| 1 | Transparency and Explicability | Users need to comprehend the decision-making processes of AI systems, ensure transparency in AI behavior, and be able to provide clear explanations and feedback on its decisions during usage. |
| 2 | Responsibility and Accountability | When employing AI, users should clearly define the attribution of responsibility for AI systems, especially in cases where AI systems generate errors or impact society, specifying who is accountable and the consequences thereof. |
| 3 | Privacy Protection | Users’ personal data should be safeguarded. AI systems must comply with privacy protection regulations and refrain from unauthorized collection, storage, or misuse of personal information. |
| 4 | Security | Users should ensure that AI systems do not pose security threats to themselves or others during use, including data breaches, technological malfunctions, or improper utilization. |
| 5 | Fairness and Unbiasedness | Users should ensure that the utilization of AI systems does not lead to or exacerbate any form of discrimination or bias, particularly in sensitive domains such as race, gender, and age. |
| 6 | Sustainability and Environmental Impact | Users should be cognizant of the environmental impact of AI technologies, including energy consumption and resource utilization, and strive to select environmentally friendly technological applications that promote sustainable development. |
| 7 | Autonomy | AI should always remain under human control, ensuring that system operations and decision-making processes align with human intentions, avoiding fully autonomous AI decision-making, especially in high-risk scenarios. |
| 8 | Prevention of Technological Misuse | Users need to ensure that AI technology is not misused or employed for malicious purposes, such as generating false information, social manipulation, or illegal activities. |
Appendix D. The AI Disclosure Checklist
| Item | Percentage | Content |
|---|---|---|
| 1 | ____% | Code written entirely by a human without AI assistance. |
| 2 | ____% | Code generated by AI but significantly modified/optimized by a human. |
| 3 | ____% | Pure AI Generation (Justification required). |
Appendix E. Post-Intervention Questionnaire
| Items | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
Performance Expectancy (PE)
References
- Al-Mughairi, H., & Bhaskar, P. (2024). Exploring the factors affecting the adoption of AI techniques in higher education: Insights from teachers’ perspectives on ChatGPT. Journal of Research in Innovative Teaching & Learning, 18(2), 232–247. [Google Scholar] [CrossRef]
- Ambalov, I. A. (2021). An investigation of technology trust and habit in IT use continuance: A study of a social network. Journal of Systems and Information Technology, 23(1), 53–81. [Google Scholar] [CrossRef]
- Artemova, I. (2024). Bridging motivation and AI in education: An activity theory perspective. Digital Education Review, (45), 59–67. [Google Scholar] [CrossRef]
- Asher, M. W., & Harackiewicz, J. M. (2025). Using choice and utility value to promote interest: Stimulating situational interest in a lesson and fostering the development of interest in statistics. Journal of Educational Psychology, 117(4), 647–662. [Google Scholar] [CrossRef]
- Ayinla, B. S., Amoo, O. O., Atadoga, A., Abrahams, T. O., Osasona, F., & Farayola, O. A. (2024). Ethical AI in practice: Balancing technological advancements with human values. International Journal of Science and Research Archive, 11(1), 1311–1326. [Google Scholar] [CrossRef]
- Barbosa, P. L. S., do Carmo, R. A. F., Gomes, J. P. P., & Viana, W. (2024). Adaptive learning in computer science education: A scoping review. Education and Information Technologies, 29(8), 9139–9188. [Google Scholar] [CrossRef]
- Becker, B. A., Denny, P., Finnie-Ansley, J., Luxton-Reilly, A., Prather, J., & Santos, E. A. (2023, March 15–18). Programming is hard—Or at least it used to be: Educational opportunities and challenges of AI code generation. 54th ACM Technical Symposium on Computer Science Education V. 1, SIGCSE 2023 (pp. 500–506), Toronto, ON, Canada. [Google Scholar] [CrossRef]
- Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge. [Google Scholar] [CrossRef]
- Cano, J. R., & Nunez, N. A. (2024). Unlocking innovation: How enjoyment drives GenAI use in higher education. Frontiers in Education, 9, 1483853. [Google Scholar] [CrossRef]
- Chang, C.-W., & Chang, S.-H. (2023). The impact of digital disruption: Influences of digital media and social networks on forming digital natives’ attitude. Sage Open, 13(3), 21582440231191741. [Google Scholar] [CrossRef]
- Chen, C.-F., & Chao, W.-H. (2011). Habitual or reasoned? Using the theory of planned behavior, technology acceptance model, and habit to examine switching intentions toward public transit. Transportation Research Part F: Traffic Psychology and Behaviour, 14(2), 128–137. [Google Scholar] [CrossRef]
- Chen, C.-H., & Chang, C.-L. (2024). Effectiveness of AI-assisted game-based learning on science learning outcomes, intrinsic motivation, cognitive load, and learning behavior. Education and Information Technologies, 29(14), 18621–18642. [Google Scholar] [CrossRef]
- Chen, X., Zou, D., Xie, H., & Wang, F. L. (2021). Past, present, and future of smart learning: A topic-based bibliometric analysis. International Journal of Educational Technology in Higher Education, 18(1), 2. [Google Scholar] [CrossRef]
- Chiu, T. K. F. (2021). A holistic approach to the design of artificial intelligence (AI) education for K-12 schools. TechTrends, 65(5), 796–807. [Google Scholar] [CrossRef]
- Chiu, T. K. F., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6, 100171. [Google Scholar] [CrossRef]
- Cohen, J. (2013). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge. [Google Scholar] [CrossRef]
- Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
- Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
- Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. [Google Scholar] [CrossRef]
- Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734. [Google Scholar] [CrossRef]
- Dwivedi, Y. K., Rana, N. P., Tamilmani, K., & Raman, R. (2020). A meta-analysis based modified unified theory of acceptance and use of technology (meta-UTAUT): A review of emerging literature. Current Opinion in Psychology, 36, 13–18. [Google Scholar] [CrossRef]
- Dzogovic, S., Zdravkovska-Adamova, B., & Serpil, H. (2024). From theory to practice: A holistic study of the application of artificial intelligence methods and techniques in higher education and science. Human Research in Rehabilitation, 14(2), 293–311. [Google Scholar] [CrossRef]
- Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
- Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. [Google Scholar] [CrossRef]
- Grange, C., Demazure, T., Ringeval, M., Bourdeau, S., & Martineau, C. (2026). The human-GenAI value loop in human-centered innovation: Beyond the magical narrative. Information Systems Journal, 36(1), 29–51. [Google Scholar] [CrossRef]
- Guo, Z., & Fryer, L. K. (2025). What really elicits learners’ situational interest in learning activities: A scoping review of six most commonly researched types of situational interest sources in educational settings. Current Psychology, 44(1), 587–601. [Google Scholar] [CrossRef]
- Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. [Google Scholar] [CrossRef]
- Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. [Google Scholar] [CrossRef]
- Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. [Google Scholar] [CrossRef]
- Huang, W., Hew, K. F., & Fryer, L. K. (2022). Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. Journal of Computer Assisted Learning, 38(1), 237–257. [Google Scholar] [CrossRef]
- Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 100017. [Google Scholar] [CrossRef]
- Kahneman, D. (2011). Thinking, fast and slow (p. 499). Farrar, Straus and Giroux. [Google Scholar]
- Kim, H.-W., Chan, H. C., & Gupta, S. (2007). Value-based adoption of mobile internet: An empirical investigation. Decision Support Systems, 43(1), 111–126. [Google Scholar] [CrossRef]
- Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed., pp. xvii, 534). The Guilford Press. [Google Scholar]
- Kocielnik, R., Amershi, S., & Bennett, P. N. (2019, May 4–9). Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. 2019 CHI Conference on Human Factors in Computing Systems, CHI’19 (pp. 1–14), Glasgow, Scotland. [Google Scholar] [CrossRef]
- Li, J., Zhang, J., Chai, C. S., Lee, V. W. Y., Zhai, X., Wang, X., & King, R. B. (2025). Analyzing the network structure of students’ motivation to learn AI: A self-determination theory perspective. npj Science of Learning, 10(1), 48. [Google Scholar] [CrossRef] [PubMed]
- Li, M., Enkhtur, A., Cheng, F., & Yamamoto, B. A. (2024). Ethical implications of ChatGPT in higher education: A scoping review. arXiv, arXiv:2311.14378. [Google Scholar]
- Long, X., Tan, X., Zhu, Y., Jiang, J., & Zhang, L. (2025). Understanding and enhancing CS students’ interaction experience with AI coding assistant tools. ACM Transactions on Software Engineering and Methodology. [Google Scholar] [CrossRef]
- Lu, Q., Zhu, L., Xu, X., Whittle, J., Zowghi, D., & Jacquet, A. (2024). Responsible AI pattern catalogue: A collection of best practices for AI governance and engineering. ACM Computing Surveys, 56(7), 173:1–173:35. [Google Scholar] [CrossRef]
- Manorat, P., Tuarob, S., & Pongpaichet, S. (2025). Artificial intelligence in computer programming education: A systematic literature review. Computers and Education: Artificial Intelligence, 8, 100403. [Google Scholar] [CrossRef]
- Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320–341. [Google Scholar] [CrossRef]
- McIntire, A., Calvert, I., & Ashcraft, J. (2024). Pressure to plagiarize and the choice to cheat: Toward a pragmatic reframing of the ethics of academic integrity. Education Sciences, 14(3), 244. [Google Scholar] [CrossRef]
- Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A. (2022). Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems, 31(3), 257–268. [Google Scholar] [CrossRef]
- Min, B., & Schwarz, N. (2022). Novelty as opportunity and risk: A situated cognition analysis of psychological control and novelty seeking. Journal of Consumer Psychology, 32(3), 425–444. [Google Scholar] [CrossRef]
- Moorhouse, B. L., Li, Y., & Walsh, S. (2023). E-classroom interactional competencies: Mediating and assisting language learning during synchronous online lessons. RELC Journal, 54(1), 114–128. [Google Scholar] [CrossRef]
- Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4), 3005–3054. [Google Scholar] [CrossRef]
- Olorunfemi, O. L., Amoo, O. O., Atadoga, A., Fayayola, O. A., Abrahams, T. O., & Shoetan, P. O. (2024). Towards a conceptual framework for ethical AI development in it systems. Computer Science & IT Research Journal, 5(3), 616–627. [Google Scholar] [CrossRef]
- Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT. Journal of Interactive Learning Research, 34(2), 213–237. [Google Scholar] [CrossRef]
- Petrescu, M.-A., Pop, E.-L., & Dan Mihoc, T. (2023). Students’ interest in knowledge acquisition in Artificial Intelligence. Procedia Computer Science, 225, 1028–1036. [Google Scholar] [CrossRef]
- Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. [Google Scholar] [CrossRef]
- Prashar, A., Gupta, P., & Dwivedi, Y. K. (2024). Plagiarism awareness efforts, students’ ethical judgment and behaviors: A longitudinal experiment study on ethical nuances of plagiarism in higher education. Studies in Higher Education, 49(6), 929–955. [Google Scholar] [CrossRef]
- Prather, J., Reeves, B. N., Denny, P., Becker, B. A., Leinonen, J., Luxton-Reilly, A., Powell, G., Finnie-Ansley, J., & Santos, E. A. (2023). “It’s weird that it knows what I want”: Usability and interactions with copilot for novice programmers. ACM Transactions on Computer-Human Interaction, 31(1), 4:1–4:31. [Google Scholar] [CrossRef]
- Qureshi, B. (2023, June 9–11). Exploring the use of ChatGPT as a tool for learning and assessment in undergraduate computer science curriculum: Opportunities and challenges. 2023 9th International Conference on E-Society e-Learning and e-Technologies (pp. 7–13), Portsmouth, UK. [Google Scholar] [CrossRef]
- Robertson, P., & Georgeon, O. L. (2025). Intrinsic motivation for artificial agents. In P. Robertson, & O. Georgeon (Eds.), Situated self-guided learning (pp. 88–120). Springer Nature. [Google Scholar] [CrossRef]
- Romero, M. (2025). From consumption to co-creation: A systematic review of six levels of ai-enhanced creative engagement in education. Multimodal Technologies and Interaction, 9(10), 110. [Google Scholar] [CrossRef]
- Sari, H. E., Tumanggor, B., & Efron, D. (2024). Improving educational outcomes through adaptive learning systems using AI. International Transactions on Artificial Intelligence, 3(1), 21–31. [Google Scholar] [CrossRef]
- Shaukat, K., Iqbal, F., Alam, T. M., Aujla, G. K., Devnath, L., Khan, A. G., Iqbal, R., Shahzadi, I., & Rubab, A. (2020). The impact of artificial intelligence and robotics on the future employment opportunities. Trends in Computer Science and Information Technology, 5(1), 050–054. [Google Scholar] [CrossRef]
- Shrestha, S., & Das, S. (2022). Exploring gender biases in ML and AI academic research through systematic literature review. Frontiers in Artificial Intelligence, 5, 976838. [Google Scholar] [CrossRef]
- Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. [Google Scholar] [CrossRef]
- Sweller, J. (2011). Cognitive load theory. In The psychology of learning and motivation: Cognition in education (Vol. 55, pp. 37–76). Elsevier Academic Press. [Google Scholar] [CrossRef]
- Takona, J. P. (2024). Research design: Qualitative, quantitative, and mixed methods approaches/sixth edition. Quality & Quantity, 58(1), 1011–1013. [Google Scholar] [CrossRef]
- Tamilmani, K., Rana, N. P., & Dwivedi, Y. K. (2021). Consumer acceptance and use of information technology: A meta-analytic evaluation of UTAUT2. Information Systems Frontiers, 23(4), 987–1005. [Google Scholar] [CrossRef]
- Tian, J., & Zhang, R. (2025). Learners’ AI dependence and critical thinking: The psychological mechanism of fatigue and the social buffering role of AI literacy. Acta Psychologica, 260, 105725. [Google Scholar] [CrossRef]
- Tlili, A., Bond, M., Bozkurt, A., Arar, K., Chiu, T. K. F., & Rospigliosi, A. (2025). Academic integrity in the generative AI (GenAI) era: A collective editorial response. Interactive Learning Environments, 33(3), 1819–1822. [Google Scholar] [CrossRef]
- Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15. [Google Scholar] [CrossRef]
- Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. [Google Scholar] [CrossRef]
- Wu, D., Zhang, S., Ma, Z., Yue, X.-G., & Dong, R. K. (2024). Unlocking potential: Key factors shaping undergraduate self-directed learning in AI-enhanced educational environments. Systems, 12(9), 332. [Google Scholar] [CrossRef]
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. [Google Scholar] [CrossRef]
- Zou, H., Chan, K. I., Pang, P. C.-I., Manditereza, B., & Shih, Y.-H. (2025). Factors influencing the reported intention of higher vocational computer science students in China to use AI after ethical training: A study in Guangdong Province. Education Sciences, 15(11), 1431. [Google Scholar] [CrossRef]
| Phase | Instructional Activity/Material | Theoretical Construct and Mechanism | Key References |
|---|---|---|---|
| Week 1–3 | Manual vs. AI challenge: students compare manual coding time vs. AI generation | Pedagogical novelty; triggering situational interest via contrast; activating hedonic motivation | (Huang et al., 2022; C.-H. Chen & Chang, 2024) |
| | Red team exercise: probing AI for hallucinations and bias | Moral sensitivity; exposing risks to build skepticism | (Dwivedi et al., 2023; Tlili et al., 2023) |
| Week 4–8 | Decomposition-based prompting: writing logic before code | Cognitive scaffolding; reducing extraneous load while maintaining germane load | (Becker et al., 2023; Prather et al., 2023) |
| Week 4–13 | AI Disclosure Checklist: mandatory form for every submission | Cognitive brake (System 2); disrupting automaticity (habit) to enforce deliberation | (Kahneman, 2011; Cotton et al., 2024) |
| Week 9–13 | The AI Auditor Task: fixing buggy AI code | Human-in-the-loop; countering the hygiene-factor perception; building competence | (Moorhouse et al., 2023; Mosqueira-Rey et al., 2023) |
| Week 14–16 | Capstone co-creation: 80% AI code allowed | Creative empowerment; reinforcing hedonic motivation through creative output | (Chiu, 2021; Fui-Hoon Nah et al., 2023) |
| | Code of Conduct co-design: class voting on rules | Cognitive legitimacy; internalizing norms to support behavioral intention | (Dwivedi et al., 2023) |
| Construct | Item | Factor Loading | Cronbach’s Alpha | AVE | Composite Reliability |
|---|---|---|---|---|---|
| Performance expectancy (PE) | PE1 | 0.777 | 0.861 | 0.514 | 0.863 |
| | PE2 | 0.752 | | | |
| | PE3 | 0.743 | | | |
| | PE4 | 0.681 | | | |
| | PE5 | 0.699 | | | |
| | PE6 | 0.639 | | | |
| Effort expectancy (EE) | EE1 | 0.770 | 0.872 | 0.541 | 0.875 |
| | EE2 | 0.767 | | | |
| | EE3 | 0.833 | | | |
| | EE4 | 0.604 | | | |
| | EE5 | 0.677 | | | |
| | EE6 | 0.739 | | | |
| Social influence (SI) | SI1 | 0.709 | 0.843 | 0.475 | 0.844 |
| | SI2 | 0.702 | | | |
| | SI3 | 0.638 | | | |
| | SI4 | 0.752 | | | |
| | SI5 | 0.659 | | | |
| | SI6 | 0.668 | | | |
| Hedonic motivation (HM) | HM1 | 0.810 | 0.881 | 0.556 | 0.882 |
| | HM2 | 0.842 | | | |
| | HM3 | 0.754 | | | |
| | HM4 | 0.646 | | | |
| | HM5 | 0.743 | | | |
| | HM6 | 0.658 | | | |
| Price value (PV) | PV1 | 0.736 | 0.890 | 0.547 | 0.890 |
| | PV2 | 0.755 | | | |
| | PV3 | 0.770 | | | |
| | PV4 | 0.749 | | | |
| | PV5 | 0.770 | | | |
| | PV6 | 0.766 | | | |
| Habit (HT) | HT1 | 0.785 | 0.891 | 0.583 | 0.893 |
| | HT2 | 0.834 | | | |
| | HT3 | 0.750 | | | |
| | HT4 | 0.782 | | | |
| | HT5 | 0.765 | | | |
| | HT6 | 0.654 | | | |
| Behavioral intention (BI) | BI1 | 0.544 | 0.789 | 0.386 | 0.787 |
| | BI2 | 0.650 | | | |
| | BI3 | 0.472 | | | |
| | BI4 | 0.650 | | | |
| | BI5 | 0.690 | | | |
| | BI6 | 0.692 | | | |
| Ethical awareness (EA) | EA1 | 0.606 | 0.794 | 0.403 | 0.797 |
| | EA2 | 0.661 | | | |
| | EA3 | 0.393 | | | |
| | EA4 | 0.721 | | | |
| | EA5 | 0.715 | | | |
| | EA6 | 0.655 | | | |
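The AVE and composite reliability figures above follow the standard formulas (AVE is the mean of the squared standardized loadings; CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²))). A small sketch, using the PE loadings from the table, reproduces the reported values:

```python
def ave(loadings):
    # Average Variance Extracted: mean of the squared standardized loadings.
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each error variance is 1 - loading^2.
    s = sum(loadings) ** 2
    e = sum(1 - l * l for l in loadings)
    return s / (s + e)

# PE1-PE6 factor loadings, copied from the table above.
pe = [0.777, 0.752, 0.743, 0.681, 0.699, 0.639]

round(ave(pe), 3)                    # -> 0.514, matching the table
round(composite_reliability(pe), 3)  # -> 0.863, matching the table
```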
| | PE | EE | SI | HM | PV | HT | EA | BI |
|---|---|---|---|---|---|---|---|---|
| PE | 0.717 | | | | | | | |
| EE | 0.573 ** | 0.736 | | | | | | |
| SI | 0.583 ** | 0.531 ** | 0.689 | | | | | |
| HM | 0.660 ** | 0.596 ** | 0.671 ** | 0.746 | | | | |
| PV | 0.507 ** | 0.541 ** | 0.441 ** | 0.579 ** | 0.740 | | | |
| HT | 0.434 ** | 0.443 ** | 0.591 ** | 0.578 ** | 0.478 ** | 0.764 | | |
| EA | 0.235 ** | 0.209 * | 0.144 | 0.208 * | 0.263 ** | 0.121 | 0.634 | |
| BI | 0.530 ** | 0.522 ** | 0.543 ** | 0.648 ** | 0.632 ** | 0.517 ** | 0.312 ** | 0.621 |

Note: diagonal entries are the square roots of the AVE; * p < 0.05, ** p < 0.01.
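Under the Fornell–Larcker criterion, discriminant validity holds when each diagonal entry (the square root of the construct's AVE) exceeds the correlations in its row and column. A quick sketch, assuming the AVE values from the preceding table, reproduces the diagonal to within 0.001 rounding:

```python
from math import sqrt

# AVE values copied from the reliability/validity table above.
ave_values = {"PE": 0.514, "EE": 0.541, "SI": 0.475, "HM": 0.556,
              "PV": 0.547, "HT": 0.583, "EA": 0.403, "BI": 0.386}

# Square roots of the AVE: the diagonal of the discriminant validity matrix.
diagonal = {k: round(sqrt(v), 3) for k, v in ave_values.items()}

diagonal["PE"]   # -> 0.717, matching the PE diagonal entry
diagonal["HT"]   # -> 0.764, matching the HT diagonal entry

# Example check: HM's diagonal (0.746) exceeds its largest
# inter-construct correlation (0.671 with SI).
diagonal["HM"] > 0.671  # -> True
```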
| χ2/df | RMSEA | GFI | AGFI | CFI | IFI | TLI |
|---|---|---|---|---|---|---|
| 1.52 | 0.062 | 0.691 | 0.655 | 0.845 | 0.849 | 0.835 |
| Hypotheses | Independent Construct | Dependent Construct | R² | Path Coefficient | t | p-Value | Result |
|---|---|---|---|---|---|---|---|
| H1 | Hedonic motivation | Habit | 0.366 | 0.457 | 5.356 | <0.001 | Supported |
| H2 | Hedonic motivation | Behavioral intention | 0.507 | 0.336 | 3.377 | 0.001 | Supported |
| H3 | Ethical awareness | Behavioral intention | 0.507 | 0.166 | 2.594 | 0.011 | Supported |
| H4 | Ethical awareness | Habit | 0.366 | −0.032 | −0.450 | 0.653 | Not supported |
| H5 | Performance expectancy | Behavioral intention | 0.507 | 0.076 | 0.863 | 0.390 | Not supported |
| H6 | Effort expectancy | Behavioral intention | 0.507 | 0.125 | 1.523 | 0.130 | Not supported |
| H7 | Social influence | Behavioral intention | 0.507 | 0.086 | 0.939 | 0.349 | Not supported |
| H8 | Price value | Habit | 0.366 | 0.222 | 2.565 | 0.011 | Supported |
| H9 | Habit | Behavioral intention | 0.507 | 0.162 | 2.012 | 0.046 | Supported |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Zou, H.; Chan, K.I.; Pang, P.; Manditereza, B.; Shih, Y.-H. To Use but Not to Depend: Pedagogical Novelty and the Cognitive Brake of Ethical Awareness in Computer Science Students’ Adoption of Generative AI. Educ. Sci. 2026, 16, 311. https://doi.org/10.3390/educsci16020311

