Article

Factors Influencing the Reported Intention of Higher Vocational Computer Science Students in China to Use AI After Ethical Training: A Study in Guangdong Province

by
Huiwen Zou
1,
Ka Ian Chan
1,
Patrick Cheong-Iao Pang
1,*,
Blandina Manditereza
2 and
Yi-Huang Shih
3
1
Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, Macau
2
Department of Childhood Education, Faculty of Education, University of the Free State, Bloemfontein 9301, South Africa
3
Center of Teacher Education, Minghsin University of Science and Technology, Hsinchu 304, Taiwan
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(11), 1431; https://doi.org/10.3390/educsci15111431
Submission received: 29 August 2025 / Revised: 18 October 2025 / Accepted: 20 October 2025 / Published: 24 October 2025

Abstract

This paper reports an in-depth analysis of the impact of ethical training on the adoption of AI tools among computer science students in higher vocational colleges. These students will serve as a pivotal human factor in advancing the field of AI. Aiming to explore practical models for integrating AI ethics into computer science education, the research seeks to promote more responsible and effective AI application. Employing a mixed-methods approach, the study included 105 students aged 20–24 from a vocational college in Guangdong Province, a developed region in China. Based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model, a five-point Likert scale was used to evaluate participants’ perceptions of AI tool usage in light of ethical principles. The Structural Equation Modeling (SEM) results indicate that while participants are motivated to adopt AI technologies in certain respects, performance expectancy does not drive behavioral intention and negatively impacts actual usage. After systematically studying AI ethics, participants attributed a high proportion of responsibility (84.89%) to external factors and prioritized safety (27.11%) among eight ethical principles. Statistical analysis shows that habit (β = 0.478, p < 0.001) and hedonic motivation (β = 0.239, p = 0.004) significantly influence behavioral intention, and social influence (β = 0.234, p = 0.008) affects use behavior. These findings on the factors that influence AI usage can inform a strategic framework for integrating ethical instruction into AI applications, with significant implications for curriculum design, policy formulation, and the establishment of ethical guidelines for AI deployment in higher educational contexts.

1. Introduction

Education as initiation is a profound concept suggesting that education transcends the mere transmission of knowledge. It emphasizes the role of education in guiding students into a deeper understanding of the world, culture, values, and ways of thinking (AlQhtani, 2025; Podgórska & Zdonek, 2024). At the same time, the significance and popularity of AI in computer science education are constantly increasing. The integration of AI technology not only enhances pedagogical practices but also facilitates personalized learning and enables the optimal allocation of teaching resources (Forero-Corba & Bennasar, 2023). Meanwhile, AI has presented ethical challenges that affect education, including in the areas of data collection, privacy protection, algorithm transparency, and fairness (Machado et al., 2025). As AI technology penetrates education ever more deeply, these ethical issues have become more prominent and a focal point for academics and educational practitioners. Ethical issues challenge the effectiveness and acceptability of AI technology and impact educational outcomes (Azoulay et al., 2025; Eden et al., 2024; Dieterle et al., 2024).
However, there is limited empirical research in this field, especially in computer science courses in higher education. Computer science students are willing participants in AI software development, given the nature of their coursework (S.-Y. Chen et al., 2022). Therefore, the ethically informed application of AI-related tools among computer science students is significant because they are educated to become professionals in the use of AI and related products and services; this is a key human factor in the development of AI. This study aims to promote the systematic integration of AI ethics courses across disciplines and thereby stimulate the sustainable development of AI in the field of education. Through computer course instruction, AI ethics training, data collection, and questionnaire surveys, this research identifies correlations between the factors influencing students’ willingness to utilize AI tools, as well as students’ ethical understanding of such tools. By analyzing the application scenarios of AI technology in teaching from a contextual perspective and discussing the associated ethical issues, this study provides a valuable reference for education policymakers, educators, and AI developers. It can cultivate students’ sensitivity to AI ethical issues and their capability to solve them, thereby supporting the sustainable development of AI in the field of education. Therefore, this study aims to address the following research questions.
  • RQ1: What are the factors that influence computer science students’ reported intention to use AI in college after ethical training?
  • RQ2: What are the characteristics of participants’ attitudes towards ethical principles and responsibility?

2. Background and Literature Review

Generative artificial intelligence (GenAI), as a transformative force, has rapidly reshaped educational practices globally (Hirpara et al., 2025). This technological leap has sparked a divided discourse in education. A core tension in this discourse lies in cultural and institutional variations in GenAI governance: educational systems across regions differ greatly in how they balance GenAI’s benefits against its risks. Proponents highlight GenAI’s potential to enhance learning efficiency and accessibility (Borah et al., 2024; Li et al., 2025). As a game changer, GenAI can bring various benefits in diverse fields (Dakanalis et al., 2024; Vaishya et al., 2024; Sumbal & Amber, 2025). Chinese education policy culture has been one of embracing technological advancements and responding positively, thereby facilitating the integration of GenAI technologies within the academic sphere (Zhang & Wang, 2025; Shahzad et al., 2025). In higher education, GenAI can support the development of adaptive learning materials and address diverse student needs (Daher & Hussein, 2024; Qi et al., 2025). Furthermore, GenAI accelerates data analysis, literature synthesis, and hypothesis generation, enabling scholars to focus on higher-order intellectual work (Borge et al., 2024; Ding et al., 2025).
On the other hand, critics emphasize the risks to academic integrity and cognitive development (Luo (Jess), 2024). GenAI presents unprecedented challenges to academic norms and is considered a threat to academic integrity and originality (Rodrigues et al., 2025). Regarding learning performance, students can develop a dependency on algorithmic support that stunts their metacognitive growth. Conventional assessment models are designed to detect discrepancies between students’ demonstrated knowledge or skills and predefined academic standards, typically focusing on identifying gaps in content mastery, procedural errors, or failure to meet performance benchmarks within established curricular frameworks (Levy-Feldman, 2025). However, GenAI systematically undermines this purpose of assessment by enabling algorithmic imitation of academic performance. Consequently, traditional assessments cannot reliably distinguish between actual learning and the work of GenAI. This is not merely a technical inconvenience but a theoretical rupture in educational epistemology (Corbin et al., 2025; Fan et al., 2025), reflecting worries about GenAI’s capacity to decouple assessment from genuine learning (Pan et al., 2025). A global cross-cultural survey shows concerns about GenAI-enabled academic dishonesty, correlating with cultural values that prioritize individual accountability (Yusuf et al., 2024). However, a complete prohibition of GenAI is not a viable solution, as prohibitions targeting emerging technologies have consistently failed; such restrictive measures tend instead to amplify human curiosity and enthusiasm, driving greater interest in exploring and engaging with the very technologies being banned, including GenAI (Tlili et al., 2025). It is therefore imperative to underscore the deliberate and reasoned integration of AI technologies within educational contexts, with the objective of maximizing their benefits while mitigating associated risks (Gao & Wang, 2024).
Therefore, it is necessary to encourage the ethical use of AI tools and to demand robust policies that address these concerns. Education systems worldwide remain far from perfect in terms of ethical training and guidance, and education in this area is still relatively neglected (An et al., 2024; Goldenkoff & Cech, 2024). This oversight can be attributed to insufficient awareness of the potential hazards posed by AI within an educational framework (Chaudhry & Kazim, 2022). Additionally, the formulation of standardized ethical guidelines for AI education remains a new endeavor for policymakers (Mallik & Gangopadhyay, 2023). This may lead students to ignore ethical norms in practical applications, resulting in technological abuse, prejudice, and other negative impacts. In 2024, ethical principles, guidelines, and best practices were outlined to address AI’s ethical challenges (Ayinla et al., 2024; Olorunfemi et al., 2024). In 2025, AI applications and educational demands have spread further, alongside substantial governmental support in various nations and regions (T. Liu et al., 2025; Niu et al., 2025). Research and regulation concerning AI ethics have thus become increasingly urgent and paramount. As early as 2019, global AI ethics guidelines were compiled to establish unified standards (Jobin et al., 2019); these guidelines examined emerging global agreements on principles and analyzed existing ethical AI guidelines. An ethical framework with five principles was proposed to guide AI use (Floridi & Cowls, 2019): human autonomy, promotion of well-being, justice, interpretability, and responsibility. In 2021, discussions focused on bias and fairness in machine learning, categorizing biases and their mitigation strategies and thereby presenting a solid theoretical foundation for developing fair AI systems (Mehrabi et al., 2021).
However, the mere existence of principles is inadequate to ensure ethical AI use. It is imperative to establish shared objectives, fiduciary duty norms, and robust accountability frameworks to achieve this goal (Heilinger et al., 2024), a process that remains arduous and protracted. Nonetheless, education, especially higher education, serves as a transformative force that motivates economic growth, safeguards cultural identity, and fosters social cohesion, ultimately contributing to the sustainable development of a nation (M.-K. Chen & Shih, 2025). Moreover, education for prospective AI developers is both impactful and foundational, given the high social demand for AI talent. Coupled with the proactive attitude towards AI in China, particularly in economically developed regions, Chinese higher education has further driven ethical AI applications (J. Liu et al., 2024; Liang et al., 2025). As culturally diverse institutions, colleges and universities have a pivotal responsibility, aligned with their core functions, to serve as leading exemplars for the responsible integration of GenAI (Chan, 2023). Beyond their traditional role of promoting academic excellence across disciplines, colleges and universities are in a position to mitigate GenAI’s inherent limitations, such as its risk of undermining students’ critical thinking when learners rely heavily on GenAI to complete tasks instead of engaging in independent analysis (Darwin et al., 2024; H. Wang et al., 2024). In addition, higher education can amplify GenAI’s transformative strengths: enhancing personalized learning, facilitating interdisciplinary research, and expanding equitable access to educational resources (Jafari & Keykha, 2024; Jin et al., 2025).
Consequently, with regard to AI utilization and education, higher education institutions can proactively implement effective measures within teaching practices to develop a more balanced and forward-looking curriculum (D. Wang et al., 2025). This can endow students with a competitive advantage in future employment and learning, thereby creating more opportunities for career development and academic growth. It is crucial for students, particularly those on the verge of graduation, to develop an acute awareness of ethical considerations when utilizing AI tools within educational settings and to be afforded opportunities to deepen their understanding of these issues (Liut et al., 2024). For educators, this provides a clear path and a valuable reference in the field of ethical AI teaching and facilitates innovation and progression in teaching (Airaj, 2024). Through such teaching practices, graduates can effectively avoid various potential occupational risks, cultivate an acute ability to identify risks, and thus perform tasks more remarkably in diverse and complex working environments and fields (Portocarrero Ramos et al., 2025). The impact of AI technology in various fields highlights the importance of higher education institutions’ cooperation in introducing AI ethics into computer science curricula to effectively provide students with assistance. This study aims to explore the ethical considerations computer science students face when using AI technology and analyze the effectiveness of implementing ethical training in education in the context of AI.

3. Research Design

This study systematically investigated the factors influencing students’ acceptance and use of AI based on the UTAUT2 model, which was proposed in 2012 as an extended version of the UTAUT model (Venkatesh et al., 2012). Its constructs are performance expectancy (PE), effort expectancy (EE), social influence (SI), facilitating conditions (FC), hedonic motivation (HM), price value (PV), habit (HT), behavioral intention (BI), and use behavior (UB).
Unlike the original model, which focuses on organizational contexts, UTAUT2 was developed to explain and predict individual users’ acceptance and usage behavior in relation to consumer technologies (Venkatesh et al., 2012). It offers a more comprehensive theoretical basis than alternative technology acceptance models, such as the Technology Acceptance Model (TAM). In the context of this study, it allows for the exploration of vocational computer science students’ AI usage intentions and provides a comprehensive, in-depth analytical framework for technology acceptance research (Dwivedi et al., 2020; Tamilmani et al., 2021).

3.1. Behavioral Intention

Behavioral intention in the UTAUT2 model is affected by the following factors. Performance expectancy is a key consideration, defined as users’ belief that AI products can improve work efficiency or quality of life (Gursoy et al., 2019). Effort expectancy is frequently influential because it lowers users’ barriers to learning and adaptation (Chao, 2019). Social influence captures the observation that users are more likely to form an intention to use AI products if important people around them support or recommend their use (Alalwan et al., 2016). In this study, the participants were required to utilize free AI tools in their projects, so the facilitating conditions and price value dimensions were excluded from the model. Meanwhile, many studies find no significant link between attitude towards behavior and behavioral intention, whereas other studies, such as those utilizing the Theory of Planned Behavior (Ajzen, 1991), find that attitude does influence behavioral intention. Due to these inconsistencies, the attitude factor was not considered in this study. Based on the analysis above and the model shown in Figure 1, the following hypotheses were formulated from the perspective of behavioral intention.
H1. 
Performance expectancy positively affects behavioral intention.
Perceived usefulness is a keystone of technology acceptance models, including TAM and UTAUT2. If participants view AI as beneficial for their academic performance, they are more likely to develop a positive intention towards its use. This hypothesis is associated with established theories that emphasize the importance of perceived usefulness in motivating technology adoption.
H2. 
Effort expectancy positively affects behavioral intention.
Ease of use is critical for technology acceptance. Even if participants recognize AI’s benefits, they might oppose adoption if they perceive it as difficult to use. This hypothesis ensures that the study accounts for the role of usability in shaping intentions.
H3. 
Social influence positively affects behavioral intention.
Social norms and peer behavior significantly impact individual decisions, particularly in educational settings, where collaboration is common (Ballesteros et al., 2025). Understanding how social influence affects AI adoption helps in designing interventions that leverage positive peer pressure.
H4. 
Hedonic motivation positively affects behavioral intention.
Beyond utility, technology adoption can be driven by the enjoyment or satisfaction derived from use (Becker et al., 2019). In educational contexts with younger users, hedonic factors are particularly relevant, as they often seek engaging and enjoyable experiences.
H5. 
Habit positively affects behavioral intention.

3.2. Use Behavior

Use behavior in the UTAUT2 model is mainly affected by the following factors. The first is behavioral intention: the user’s willingness to use AI products is directly related to use behavior, and a strong use intention is more likely to be converted into actual use behavior (Sheppard et al., 1988; Venkatesh et al., 2012). In addition, habit can directly affect use behavior, even bypassing behavioral intention in some cases (Limayem et al., 2007). Although social influence mainly affects use behavior through behavioral intention, it can also affect use behavior directly, especially in organizational environments (Dwivedi et al., 2019). Other factors, including performance expectancy, effort expectancy, and hedonic motivation, jointly influence and shape the user’s actual use behavior of AI products. Therefore, the following hypotheses were proposed from the perspective of use behavior.
H6. 
Habit positively affects use behavior.
As for H5–H6, over time, repeated use of technology leads to habit formation, influencing both intention and actual usage. This hypothesis recognizes the importance of long-term patterns in technology adoption and provides insights into sustained behavior.
H7. 
Behavioral intention positively affects use behavior.
Linking intention to behavior is fundamental in predictive models. Understanding this relationship helps in forecasting how likely individuals are to adopt AI in education based on their intentions.
H8. 
Performance expectancy positively affects use behavior.
H9. 
Hedonic motivation positively affects use behavior.
H10. 
Social influence positively affects use behavior.
H11. 
Effort expectancy positively affects use behavior.
Regarding H8–H11, these hypotheses expand the model by recognizing that various factors can independently influence actual use behavior. This widens the understanding beyond just intention and provides a more holistic view of technology adoption dynamics.
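The hypothesized paths can be summarized compactly in lavaan-style model syntax. Below is a minimal sketch, using the open-source Python package semopy, of how H1–H11 map onto a measurement and structural model; the item labels (PE1–PE3, etc.) and the input file name are hypothetical stand-ins for the questionnaire items, and this is an illustration rather than the analysis pipeline used in this study.

```python
# pip install semopy pandas -- illustrative only; item names and file are hypothetical
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model: each latent construct loads on three Likert items
PE =~ PE1 + PE2 + PE3
EE =~ EE1 + EE2 + EE3
SI =~ SI1 + SI2 + SI3
HM =~ HM1 + HM2 + HM3
HT =~ HT1 + HT2 + HT3
BI =~ BI1 + BI2 + BI3
UB =~ UB1 + UB2 + UB3

# Structural model: H1-H5 (predictors of behavioral intention)
BI ~ PE + EE + SI + HM + HT
# H6-H11 (predictors of use behavior)
UB ~ HT + BI + PE + HM + SI + EE
"""

df = pd.read_csv("likert_responses.csv")  # one column per item, one row per respondent
model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect(std_est=True))  # standardized path estimates with p-values
```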

3.3. Data Collection

This study designed the experimental steps from the perspective of AI ethics (as shown in Figure 2), with data collection proceeding through a sequence of structured steps. The project was conducted over 8 weeks, with 3.5 h of lab time per week, totaling 28 h. After the intervention and ethics training about AI, students’ perceptions of AI tool usage were assessed in relation to various factors, including performance expectancy, effort expectancy, social influence, hedonic motivation, habit, behavioral intention, and use behavior, as illustrated in Figure 1.

3.3.1. Group Interview and Questionnaire

In this part, data on participants’ backgrounds were collected. Group interviews and questionnaires involving 105 participants from computing-related fields were conducted. Prior to the initiation of this study, all participants were explicitly informed about data security measures and provided informed consent to participate. The study was approved by the college. In Phase 1, as shown in Figure 2, six semi-structured informal group interviews were conducted in a relaxed setting to reduce anxiety and encourage honest responses, as such environments can promote the sharing of personal experiences and views. Each group had 15–20 participants, with 30 min sessions discussing their understanding of, usage of, and attitudes towards AI tools.
A paper-based questionnaire with three parts, as shown in Appendix A, Appendix B, Appendix C and Appendix D, was designed according to interview insights and administered under supervision. This approach can achieve a greater number of more stable responses than online questionnaires, as participants spend more time on and are more careful with paper surveys (Ceccato et al., 2024; Haas et al., 2021). The first part, covering topics such as participants’ demographic information, awareness duration, and frequency of AI tool use, was administered in Phase 1. In this phase, students were required or encouraged to use AI tools in assignments, and no AI ethics training was provided. A second round of group interviews was carried out at the beginning of Phase 2 to review what had been done in the previous phase in preparation for Phase 2. In Phases 3 and 4, group discussions were arranged among the participants to strengthen the influence of the AI ethics training. The second part of the questionnaire, based on UTAUT2 and interview insights, was administered under supervision in Phase 4. Participants completed a 5-point Likert scale with questions across the different dimensions; of the 110 distributed questionnaires, 105 valid responses were returned, an effective recovery rate of 95.45%. It is essential to acknowledge that the data derived from these sources reflect participants’ self-reported responses, which may not necessarily represent their actual beliefs.

3.3.2. Project Assignment

To ensure consistency in the cognitive complexity of project tasks, participants were required to develop a project with an acceptable number of lines of code (LoC) (Wijendra & Hewagamage, 2021): the Python (Version 3.8) programming language was mandated, over 400 lines of effective code were required, and Cyclomatic Complexity (CC) (McCabe, 1976) and Halstead metrics (Khan & Nadeem, 2023) had to be at a medium level. Specifically, medium CC and Halstead values reflect tasks of moderate structural and computational complexity. Participants were allowed to use AI tools flexibly during the project, mirroring real-world work environments and boosting efficiency. This self-directed learning model can empower participants to explore solutions beyond their existing knowledge (Nozari et al., 2024; Gelvanovsky & Saduov, 2025).
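As an illustration of how such thresholds could be checked automatically, the sketch below screens a submission with the open-source radon package; the file name and the exact “medium level” cut-offs are assumptions, since the paper does not specify the tooling used for grading.

```python
# pip install radon -- one plausible way to screen submissions; not the authors' tooling
from radon.raw import analyze          # raw metrics, including source lines of code
from radon.complexity import cc_visit  # McCabe cyclomatic complexity per block
from radon.metrics import h_visit      # Halstead metrics

with open("student_project.py") as f:  # hypothetical student submission
    source = f.read()

raw = analyze(source)
print("effective LoC (SLOC):", raw.sloc)             # requirement: over 400 lines

max_cc = max(block.complexity for block in cc_visit(source))
print("max cyclomatic complexity:", max_cc)          # "medium" band is an assumption

halstead = h_visit(source).total
print("Halstead volume:", halstead.volume)
print("Halstead difficulty:", halstead.difficulty)
```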

3.3.3. Ethics Training

The ethics training module was introduced in Phase 3 (see Figure 2 for details). There were no ethics-related requirements during Phase 1, but, during Phase 2, ethics-related case sharing laid the foundation for the ethical training in Phase 3, where AI ethics was a major focus. As shown in Figure 3, an AI ethics training program was designed to impart essential principles of AI ethics through four comprehensive modules. Every module discussed ethical issues and followed a consistent structure, including objectives, case examples, group discussions, and conclusions, to ensure a thorough understanding of the topics covered. Each module began with learning objectives, followed by a case illustration that provided a springboard for group discussion. This highly hands-on, interactive framework enabled participants to engage deeply with the training (Airaj, 2024).
In Phase 4, the third part of the questionnaire was also administered, with its content primarily focused on assessing participants’ attitudes towards ethical principles and ethical responsibility based on previous studies (Jobin et al., 2019; Airaj, 2024; Radanliev et al., 2024). Among the ethical principles, security (13.61%) and privacy protection (13.50%) were the most emphasized and can together be grouped under the general heading of safety (27.11%). This reflects an emphasis on data and system protection, while environmental impact (10.65%) ranked lowest. This approach, in conjunction with the subsequent application of AI tools in projects, facilitated preparation for accurate data collection in Phase 4.
As illustrated in Figure 4, with regard to ethical responsibility, the results indicated that 17.93% and 17.89% of participants, respectively, considered AI developers and development companies responsible. All categories other than the individual user (17.01%) can be classed as external parties, and these together accounted for 84.89%. This result indicates that users place more responsibility on other parties than on themselves. To some extent, some individuals even demonstrated a greater inclination to blame AI instead of accepting responsibility for their own role in using it (Joo, 2024). This provides a clearer picture of distributed responsibility, highlighting the need for careful and responsible use of AI (Lyons et al., 2023; Radanliev et al., 2024).

4. Results

4.1. Characteristics of the Samples

The survey sample comprised 105 students from a vocational college. As is typical in computer science study and employment, where males outnumber females, over 85% of the participants were male, which reflects the real situation (Shrestha & Das, 2022). They had taken courses in programming languages, including Python, Java, and C++. As shown in Table 1, over the past 2.5 years, only 4.76% had used AI tools for more than 24 months, while 25% had less than one year of experience. Approximately 33% of the participants had used AI tools for only 1 to 6 months. These findings show that the proportion of participants utilizing AI tools remained relatively low, revealing a notable disparity in application; this, in turn, helped optimize the implementation of the AI training so that it was more tailored, accessible, and efficient (Demartini et al., 2024; Strielkowski et al., 2025).

4.2. Reliability and Validity of Constructs

Mplus 8 (Editor Version 1.6 (1)) and IBM SPSS 27.0 were used to analyze the collected data. The measurement model demonstrates acceptable reliability and validity across most constructs. As shown in Table 2, Cronbach’s alpha values ranged from 0.648 to 0.84, indicating satisfactory internal consistency for the constructs, except for behavioral intention, which had a lower alpha of 0.457. This lower value suggests potential issues with the behavioral intention construct that might require further investigation or refinement of its measurement items. Composite Reliability (CR) values surpassed the recommended threshold of 0.70 for all constructs except effort expectancy and behavioral intention, reinforcing the need for cautious interpretation of these results. The Average Variance Extracted (AVE) values for hedonic motivation (0.557) and habit (0.644) exceeded the suggested minimum of 0.50 (Fornell & Larcker, 1981), showing adequate convergent validity. Performance expectancy (0.45), effort expectancy (0.389), and behavioral intention (0.456) had AVE values below the threshold, while social influence (0.508) was only marginally above it. However, a higher CR can compensate for a slightly lower AVE and maintain the convergent validity of the model (Lam, 2012): a low AVE indicates that the validity of the measurement model may be insufficient, but a relatively high CR suggests that the measurement tool performs well in terms of internal consistency.
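These statistics follow standard formulas: Cronbach’s alpha is (k/(k−1))·(1 − Σ item variances / variance of the summed scale), CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), and AVE is the mean squared standardized loading. A minimal sketch is given below; the middle habit loading (0.805) is an assumed value inside the range reported in Table 3 (0.758–0.842), chosen only to show that the formulas reproduce the published CR (0.844) and AVE (0.644).

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of summed scale)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings**2).mean()

# Habit loadings: 0.758 and 0.842 are reported in Table 3; 0.805 is assumed
ht = np.array([0.758, 0.805, 0.842])
print(round(composite_reliability(ht), 3))       # 0.844, matching the reported CR
print(round(average_variance_extracted(ht), 3))  # 0.644, matching the reported AVE
```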

4.3. Impact of Key Constructs

As indicated in Table 3, habit emerged as a significant factor, with high factor loadings (0.758–0.842) and strong reliability metrics (Cronbach’s alpha = 0.84, CR = 0.844), along with a substantial AVE of 0.644. This finding aligns with prior research emphasizing the critical role of habit in technology continuance and acceptance (Limayem et al., 2007). Hedonic motivation showed strong reliability (Cronbach’s alpha = 0.788) and validity (AVE = 0.557), underscoring the importance of enjoyment and pleasure in driving users’ behavioral intention and use behavior. This supports prior work identifying hedonic motivation as a vital predictor of technology acceptance in consumer contexts (Venkatesh et al., 2012). Social influence showed a moderate impact, with acceptable reliability (Cronbach’s alpha = 0.741) and an AVE (0.508) slightly above the threshold (0.5). The factor loadings for performance expectancy items varied from 0.664 to 0.684, and effort expectancy items ranged from 0.566 to 0.716, which was acceptable. Use behavior showed mixed results, with high factor loadings for UB1 (0.821) and UB2 (0.83) but a lower loading for UB3 (0.415).

4.4. Discriminant Validity

An assessment of the constructs’ discriminant validity was performed, with the outcomes reported in Table 3. The square root of the AVE (√AVE) for each construct was compared to the correlations between constructs, and discriminant validity was established when √AVE was higher than the construct’s correlations with all other constructs (Fornell & Larcker, 1981). For all constructs, √AVE exceeds the correlations with the other constructs. For habit, √AVE was 0.802, greater than its correlation with any other construct; its correlations with the other constructs were only weak, such as 0.463* with social influence. Hedonic motivation had a √AVE of 0.746.
The √AVE for performance expectancy was 0.671, while its correlation with hedonic motivation was 0.248*, which remained below it, showing acceptable discriminant validity. The correlation between performance expectancy and effort expectancy was 0.283*, approaching √AVE. Additionally, the √AVE for behavioral intention was 0.675, with its correlation with habit being 0.639**, which was comparatively close to √AVE. This proximity may indicate a high correlation between behavioral intention and habit, potentially signifying measurement overlap.
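Operationally, the Fornell–Larcker check builds a matrix with √AVE on the diagonal and latent correlations off it, then verifies that each diagonal entry exceeds every correlation in its row and column. The sketch below does this for three constructs using values reported above (√0.644 ≈ 0.802, √0.557 ≈ 0.746, √0.508 ≈ 0.713, and the habit–social influence correlation of 0.463); the other off-diagonal correlations are placeholders, since Table 3 is not reproduced here.

```python
import numpy as np
import pandas as pd

def fornell_larcker(ave: pd.Series, corr: pd.DataFrame) -> pd.DataFrame:
    """Return corr with sqrt(AVE) on the diagonal; discriminant validity holds
    when each diagonal entry exceeds the off-diagonal entries in its row/column."""
    m = corr.copy()
    for c in m.index:
        m.loc[c, c] = np.sqrt(ave[c])
    return m

ave = pd.Series({"HT": 0.644, "HM": 0.557, "SI": 0.508})  # reported AVE values
corr = pd.DataFrame(
    [[1.000, 0.350, 0.463],  # HT-SI = 0.463 is reported; HT-HM = 0.350 is a placeholder
     [0.350, 1.000, 0.300],  # HM-SI = 0.300 is a placeholder
     [0.463, 0.300, 1.000]],
    index=["HT", "HM", "SI"], columns=["HT", "HM", "SI"],
)

print(fornell_larcker(ave, corr).round(3))
# Diagonal 0.802, 0.746, 0.713 dominates every correlation in its row and column,
# so discriminant validity holds for these three constructs.
```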
While habit and hedonic motivation are confirmed as significant predictors, consistent with prior research (Venkatesh et al., 2012), the weaker showing of performance expectancy and effort expectancy diverges from some studies, which have found these factors to be strong determinants of technology acceptance (Zhou et al., 2010). This discrepancy can be attributed to the context-specific nature of technology acceptance, as users may prioritize enjoyment and routine over perceived usefulness and ease of use.

4.5. Regression Analyses

Figure 5 and Figure 6 illustrate the regression model. Each figure plots two variables: the horizontal axis is the regression standardized predicted value, and the vertical axis is the regression standardized residual. The standardized residual is the residual divided by its standard error, which allows the sizes of the residuals of different observations to be compared without being affected by scale. The points in the plots are roughly randomly and evenly distributed, which shows that the linear relationship assumed by the regression model is reasonable (Montgomery et al., 2021).
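For readers who wish to reproduce this diagnostic, the sketch below generates a standardized-predicted versus standardized-residual plot with statsmodels and matplotlib; the data are synthetic (the survey data are available only on request), so only the axes and layout, not the values, mirror Figure 5 and Figure 6.

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Synthetic stand-in: 105 respondents, four predictors, one outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(105, 4))
y = X @ np.array([0.30, 0.20, 0.25, -0.15]) + rng.normal(scale=0.5, size=105)

fit = sm.OLS(y, sm.add_constant(X)).fit()

# Standardized predicted values and internally studentized (standardized) residuals
std_pred = (fit.fittedvalues - fit.fittedvalues.mean()) / fit.fittedvalues.std()
std_resid = fit.get_influence().resid_studentized_internal

plt.scatter(std_pred, std_resid, s=12)
plt.axhline(0, linewidth=1)
plt.xlabel("Regression standardized predicted value")
plt.ylabel("Regression standardized residual")
plt.title("Residuals should scatter randomly and evenly around zero")
plt.show()
```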

5. Discussion

This study systematically examined the implementation of AI in computer science education, focusing on ethical training and assessing participants’ willingness and actual usage of AI technologies. By employing the UTAUT2 model, the research identifies the factors that influence participants’ behavioral intention and use behavior concerning AI tools within an ethical framework. In this research, SEM was employed to evaluate the proposed hypotheses, with the outcomes of the SEM analysis presented in Figure 7 and Table 4. This approach allows for a comprehensive examination of the complex interplay between technology acceptance and ethical considerations in the educational context.

5.1. Habit as a Predictor of Behavioral Intentions and Use Behavior

The empirical results emphasize the role of habit in influencing participants’ behavioral intention to use AI technologies. As displayed in Table 4 and Figure 7, one of the most significant findings from this study is the strong predictive power of habit for both behavioral intention (β = 0.478, p < 0.001) and use behavior (β = 0.299, p = 0.004). This result underscores the importance of routine engagement and familiarity with AI tools in promoting sustained acceptance and usage of technology in educational settings. The habit formation process aligns with the existing literature on technology adoption, emphasizing that repeated interactions with a technology reduce cognitive load and increase efficiency over time (C.-F. Chen & Chao, 2011; Ambalov, 2021). In the context of computer science education, this implies that intentional efforts to integrate AI tools into daily learning activities can lead to greater acceptance and long-term usage. Moreover, the habit-based model suggests that behavioral intentions are not solely driven by rational evaluations of technology but also by the subconscious influence of repeated behaviors. This finding has important implications for educational policymakers and practitioners, who should consider mechanisms to encourage routine use of AI tools in classrooms.

5.2. Role of Hedonic Motivation

The study highlights the positive influence of hedonic motivation on participants’ behavioral intentions (β = 0.239, p = 0.004), and this indicates that the enjoyment derived from using AI technologies stimulates adoption behaviors. This aligns with prior research demonstrating that younger users, who are typically more open to novel experiences and value engaging interfaces, are particularly influenced by hedonic factors in technology acceptance (Venkatesh et al., 2012). In higher computer science education, where students often interact with cutting-edge technologies, the hedonic motivation factor is likely amplified due to the inherently innovative nature of AI tools. Incorporating elements that enhance enjoyment can foster positive views towards technology use. Educational institutions can leverage this finding by designing AI-driven learning platforms and tools that prioritize user engagement and enjoyment. Specifically, incorporating gamification elements or interactive simulations into AI-based educational resources could increase students’ intrinsic motivation to engage with these technologies. This approach not only fosters positive attitudes towards AI but also aligns with the broader objective of creating a more dynamic and immersive learning environment.

5.3. Ethical Considerations and Lack of Influence from Traditional Predictors

The study’s results demonstrate that traditional predictors of technology acceptance, such as performance expectancy (H1 not supported) and effort expectancy (H2 not supported), are not significant drivers of participants’ behavioral intentions. Performance expectancy, an individual’s belief about how useful a technology is in achieving specific goals, typically encourages technology adoption. However, in this study, performance expectancy has a negative effect on actual use behavior. This finding challenges the assumptions of classical technology adoption models, which prioritize perceived usefulness and ease of use as primary determinants of technology acceptance (Davis, 1989). This unexpected lack of influence suggests that ethical concerns may play a more prominent role in shaping participants’ attitudes towards AI tools. Specifically, participants’ reluctance to adopt AI technologies despite their performance benefits could stem from ethical worries related to academic integrity or over-reliance on automated processes. Students might be concerned about the potential for AI to facilitate cheating or diminish their ability to independently solve complex problems. These concerns highlight a nuanced interplay between technology acceptance and ethical awareness in educational contexts.
Furthermore, as shown in Table 4, the negative effect of performance expectancy on use behavior (H8 partially supported, β = −0.149, p = 0.057) suggests that even when participants recognize the benefits of AI tools, they may actively limit their usage due to concerns about over-reliance or ethical impropriety (Dinev & Hart, 2006). This finding underscores the importance of addressing both cognitive and emotional factors in fostering acceptance of emerging technologies like AI. Therefore, it is essential to integrate ethical education into technology adoption processes, emphasizing the responsible use of AI as a supplementary tool that enhances learning without overshadowing fundamental skills. Educators should create an environment where students feel empowered to use AI responsibly, reinforcing cultural values of independence while acknowledging the benefits of technological support. Institutions should establish clear, positive guidelines on appropriate AI usage, helping students understand how to leverage these tools ethically and effectively. By considering these factors, educational institutions can better align AI implementation with ethical standards and cultural values, fostering a balanced and informed approach to technology use in learning environments.

5.4. Social Influence and Contextual Factors

As displayed in Table 4, while social influence did not significantly affect participants’ behavioral intentions (H3 not supported), it demonstrated a positive impact on their use behavior (H10 supported, β = 0.234, p = 0.008). This suggests that environmental factors, such as peer usage norms and normative pressures within educational settings, play a crucial role in shaping actual adoption patterns of AI technologies. The distinction between intention and behavior highlights the importance of contextual influences in technology adoption, suggesting that individual attitudes alone may not fully predict real-world usage (Sykes et al., 2009). Considering these findings, it is important for educational institutions to foster a supportive environment where the use of AI tools aligns with both academic expectations and ethical values. Establishing clear guidelines for the appropriate use of AI in learning activities and promoting collaborative projects that emphasize the responsible application of technology could boost students’ confidence in leveraging AI tools.

6. Limitations and Future Research

This study has some limitations worth noting. Firstly, the sample was limited in size and drawn from a homogeneous population, which restricts the generalizability of the findings; future work could increase and diversify the sample data. The elements and number of questions drawn from UTAUT2 could also be adjusted to enhance validity and reliability. Furthermore, because there is now substantial research on AI use prior to ethical training that can serve as a comparative reference, and because the post-intervention data possess inherent research value, this study did not incorporate additional comparison-group data. However, future efforts could collect data from a non-intervention group to amplify comparative effects across a more extensive sample.

7. Conclusions

This research offers valuable insights into the effects of ethical training on AI tool utilization. The results highlight several pivotal determinants that markedly affect the deployment of AI tools within the ethical training paradigm. Notably, ethical training in AI enhances, rather than diminishes, user intentions to employ AI tools. This indicates that ethical integration can be effectively incorporated into AI education without hindering adoption. Our study illustrates that after ethics training, participants tend to attribute responsibility to external factors rather than their own accountability.
The findings yield several significant implications. Firstly, they highlight the importance of reinforcing key determinants, such as habit formation, enjoyment, and social influence. This approach can inform curriculum design and policy development in higher educational settings to ensure the inclusion of ethical considerations in AI education. Secondly, the outcomes emphasize the essential role of aligning AI tools with ethical principles for the sustainable progression of computer science education. Thirdly, the implementation of ethical frameworks in AI training facilitates AI integration, potentially enhancing operational efficiency and reducing resistance to technological advancements. This strategy not only encourages the adoption of AI tools but also ensures their responsible deployment.
In summary, this research adds to the expanding knowledge on AI ethics education and its influence on technology adoption. It offers support for developing more effective and responsible AI education programs in higher vocational contexts. As AI continues to evolve, equipping the next generation of computer professionals with both technical proficiency and ethical understanding will be significant for the responsible advancement and deployment of AI technologies.

Author Contributions

Conceptualization, H.Z. and P.C.-I.P.; methodology, H.Z., B.M. and Y.-H.S.; software, K.I.C.; validation, P.C.-I.P. and Y.-H.S.; formal analysis, H.Z. and K.I.C.; investigation, B.M.; resources, H.Z. and P.C.-I.P.; data curation, K.I.C., B.M. and Y.-H.S.; writing—original draft, H.Z.; writing—review and editing, P.C.-I.P.; visualization, H.Z.; supervision, P.C.-I.P.; project administration, P.C.-I.P.; funding acquisition, P.C.-I.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Macao Science and Technology Development Fund (FDCT; funding ID: 0032/2025/ITP1) and a Macao Polytechnic University research grant (project code: RP/FCA-10/2022; submission code: fca.2b5e.e21e.9).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Pedagogic Committee of Faculty of Applied Sciences of the Macao Polytechnic University (protocol code HEA002-FCA-2024 and date: 27 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
PE	Performance Expectancy
EE	Effort Expectancy
SI	Social Influence
FC	Facilitating Conditions
HM	Hedonic Motivation
PV	Price Value
HT	Habit
BI	Behavioral Intention
UB	Use Behavior
LoC	Lines of Code
CC	Cyclomatic Complexity
CR	Composite Reliability
AVE	Average Variance Extracted
√AVE	Square Root of the AVE
UTAUT2	Unified Theory of Acceptance and Use of Technology 2

Appendix A. Pre-Intervention Questionnaire

Items
  • Name, Age, Gender, Which Year
  • How long have you been familiar with AI tools?
  • How long have you used the AI tools?
  • Have you taken or studied any courses related to AI ethics?

Appendix B. Group Interview Questions

Items
  • Do you enjoy using AI tools?
  • Have you used AI tools for things like writing code or reports?
  • What part of AI tools helped you learn the most, and can you give an example to share with us?
  • To what extent do you perceive academic misconduct as relevant to your vocational coursework? Please explain your reasoning. (Risks)
  • For vocational students like you who want IT jobs, like making software, how does using AI in your classes help you get ready for work? (Related to future career)
  • What one change to the curriculum (like assignments, training, or tools) would help you use AI the right way and make it work better for you? (Recommendations)

Appendix C. Post-Intervention Questionnaire-1

Items (rated 1–5)
Performance Expectancy
  • I believe using AI technology ethically can enhance my learning efficiency
  • Using AI technology in a moral manner helps me achieve better academic results
  • Ethically using AI technology can improve my professional skills
Effort Expectancy
  • Learning to use AI ethically is easy for me
  • Following AI ethics rules isn’t complicated for me
  • I think using AI ethically doesn’t take much effort
Social Influence
  • My classmates believe AI should be used ethically
  • Teachers encourage me to consider ethics when using AI
  • Society pushes me to use AI ethically
Hedonic Motivation
  • Using AI ethically makes me feel satisfied and at ease
  • I enjoy exploring how to use AI ethically
  • Using AI ethically makes learning more fun
Habit
  • I habitually consider ethics when using AI
  • Using AI ethically has become natural for me
  • I follow AI ethics rules unconsciously
Behavioral Intention
  • I plan to keep using AI ethically in the future
  • I’m willing to follow AI ethics guidelines
  • I intend to learn more about ethical AI use
Use Behavior
  • I currently use AI tools and technologies ethically
  • I follow ethical standards for AI in projects and assignments
  • I avoid any unethical AI usages

Appendix D. Post-Intervention Questionnaire-2

Items (rated 1–5)
Who do you think should bear the main responsibility for ensuring that AI is used ethically?
  • Government
  • AI developer
  • Individual user
  • Software company
  • Educational institution
  • Intermediary agency
Which of the following AI ethical principles are you more concerned about?
  • Transparency
  • Responsibility
  • Privacy
  • Security
  • Fairness
  • Environmental
  • Autonomy
  • Abuse/Misuse

References

  1. Airaj, M. (2024). Ethical artificial intelligence for teaching-learning in higher education. Education and Information Technologies, 29(13), 17145–17167. [Google Scholar] [CrossRef]
  2. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. [Google Scholar] [CrossRef]
  3. Alalwan, A. A., Dwivedi, Y. K., Rana, N. P. P., & Williams, M. D. (2016). Consumer adoption of mobile banking in Jordan: Examining the role of usefulness, ease of use, perceived risk and self-efficacy. Journal of Enterprise Information Management, 29(1), 118–139. [Google Scholar] [CrossRef]
  4. AlQhtani, F. M. (2025). Knowledge management for research innovation in universities for sustainable development: A qualitative approach. Sustainability, 17(6), 2481. [Google Scholar] [CrossRef]
  5. Ambalov, I. A. (2021). An investigation of technology trust and habit in IT use continuance: A study of a social network. Journal of Systems and Information Technology, 23(1), 53–81. [Google Scholar] [CrossRef]
  6. An, Q., Yang, J., Xu, X., Zhang, Y., & Zhang, H. (2024). Decoding AI ethics from users’ lens in education: A systematic review. Heliyon, 10(20), e39357. [Google Scholar] [CrossRef] [PubMed]
  7. Ayinla, B. S., Amoo, O. O., Atadoga, A., Abrahams, T. O., Osasona, F., & Farayola, O. A. (2024). Ethical AI in practice: Balancing technological advancements with human values. International Journal of Science and Research Archive, 11(1), 1311–1326. [Google Scholar] [CrossRef]
  8. Azoulay, R., Hirst, T., & Reches, S. (2025). Large language models in computer science classrooms: Ethical challenges and strategic solutions. Applied Sciences, 15(4), 1793. [Google Scholar] [CrossRef]
  9. Ballesteros, M. A., Acosta-Enriquez, B. G., Valle, M. d. l. Á. G., Morales-Angaspilco, J. E., Callejas Torres, J. C., Luján López, J. E., Blanco-García, L. E., García Juárez, H. D., & Jordan, O. H. (2025). The influence of social norms and word-of-mouth marketing on behavioral intention and behavioral use of generative AI chatbots among university students. Computers in Human Behavior Reports, 19, 100760. [Google Scholar] [CrossRef]
  10. Becker, S., Bräscher, A.-K., Bannister, S., Bensafi, M., Calma-Birling, D., Chan, R. C., Eerola, T., Ellingsen, D.-M., Ferdenzi, C., Hanson, J. L., Joffily, M., Lidhar, N. K., Lowe, L. J., Martin, L. J., Musser, E. D., Noll-Hussong, M., Olino, T. M., Lobo, R. P., & Wang, Y. (2019). The role of hedonics in the human affectome. Neuroscience and Biobehavioral Reviews, 102, 221. [Google Scholar] [CrossRef] [PubMed]
  11. Borah, A. R., Nischith, T. N., & Gupta, S. (2024, January 4–6). Improved learning based on GenAI. 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT) (pp. 1527–1532), Bengaluru, India. [Google Scholar] [CrossRef]
  12. Borge, M., Smith, B. K., & Aldemir, T. (2024). Using generative AI as a simulation to support higher-order thinking. International Journal of Computer-Supported Collaborative Learning, 19(4), 479–532. [Google Scholar] [CrossRef]
  13. Ceccato, V., Gliori, G., Näsman, P., & Sundling, C. (2024). Comparing responses from a paper-based survey with a web-based survey in environmental criminology. Crime Prevention and Community Safety, 26(2), 216–243. [Google Scholar] [CrossRef]
  14. Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 1–25. [Google Scholar] [CrossRef]
  15. Chao, C.-M. (2019). Factors determining the behavioral intention to use mobile learning: An application and extension of the UTAUT model. Frontiers in Psychology, 10, 1652. [Google Scholar] [CrossRef] [PubMed]
  16. Chaudhry, M. A., & Kazim, E. (2022). Artificial Intelligence in Education (AIEd): A high-level academic and industry note 2021. AI and Ethics, 2(1), 157–165. [Google Scholar] [CrossRef] [PubMed]
  17. Chen, C.-F., & Chao, W.-H. (2011). Habitual or reasoned? Using the theory of planned behavior, technology acceptance model, and habit to examine switching intentions toward public transit. Transportation Research Part F: Traffic Psychology and Behaviour, 14(2), 128–137. [Google Scholar] [CrossRef]
  18. Chen, M.-K., & Shih, Y.-H. (2025). The role of higher education in sustainable national development: Reflections from an international perspective. Edelweiss Applied Science and Technology, 9(4), 1343–1351. Available online: https://ideas.repec.org//a/ajp/edwast/v9y2025i4p1343-1351id6262.html (accessed on 1 October 2025). [CrossRef]
  19. Chen, S.-Y., Su, Y.-S., Ku, Y.-Y., Lai, C.-F., & Hsiao, K.-L. (2022). Exploring the factors of students’ intention to participate in AI software development. Library Hi Tech, 42(2), 392–408. [Google Scholar] [CrossRef]
  20. Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education, 1–17. [Google Scholar] [CrossRef]
  21. Daher, W., & Hussein, A. (2024). Higher education students’ perceptions of GenAI tools for learning. Information, 15(7), 416. [Google Scholar] [CrossRef]
  22. Dakanalis, A., Wiederhold, B. K., & Riva, G. (2024). Artificial intelligence: A game-changer for mental health care. Cyberpsychology, Behavior, and Social Networking, 27, 100–104. [Google Scholar] [CrossRef] [PubMed]
  23. Darwin, Rusdin, D., Mukminatien, N., Suryati, N., Laksmi, E. D., & Marzuki. (2024). Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations. Cogent Education, 11(1). [Google Scholar] [CrossRef]
  24. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  25. Demartini, C. G., Sciascia, L., Bosso, A., & Manuri, F. (2024). Artificial intelligence bringing improvements to adaptive learning in education: A case study. Sustainability, 16(3), 1347. [Google Scholar] [CrossRef]
  26. Dieterle, E., Dede, C., & Walker, M. (2024). The cyclical ethical effects of using artificial intelligence in education. AI & Society, 39(2), 633–643. [Google Scholar] [CrossRef]
  27. Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61–80. [Google Scholar] [CrossRef]
  28. Ding, Z., Brachman, M., Chan, J., & Geyer, W. (2025, June 23–25). “The diagram is like guardrails”: Structuring GenAI-assisted hypotheses exploration with an interactive shared representation. 2025 Conference on Creativity and Cognition (pp. 606–625), Virtual, UK. [Google Scholar] [CrossRef]
  29. Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734. [Google Scholar] [CrossRef]
  30. Dwivedi, Y. K., Rana, N. P., Tamilmani, K., & Raman, R. (2020). A meta-analysis based modified unified theory of acceptance and use of technology (meta-UTAUT): A review of emerging literature. Current Opinion in Psychology, 36, 13–18. [Google Scholar] [CrossRef]
  31. Eden, C. A., Chisom, O. N., & Adeniyi, I. S. (2024). Integrating AI in education: Opportunities, challenges, and ethical considerations. Magna Scientia Advanced Research and Reviews, 10(2), 006–013. [Google Scholar] [CrossRef]
  32. Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. [Google Scholar] [CrossRef]
  33. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). [Google Scholar] [CrossRef]
  34. Forero-Corba, W., & Bennasar, F. N. (2023). Techniques and applications of machine learning and artificial intelligence in education: A systematic review. RIED-Revista Iberoamericana de Educación a Distancia, 27(1), 209–253. [Google Scholar] [CrossRef]
  35. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
  36. Gao, J., & Wang, D. (2024). Quantifying the use and potential benefits of artificial intelligence in scientific research. Nature Human Behaviour, 8(12), 2281–2292. [Google Scholar] [CrossRef]
  37. Gelvanovsky, G., & Saduov, R. (2025). The impact of ChatGPT on academic writing instruction for computer science students. In G. Jezic, Y.-H. Chen-Burger, M. Kušek, R. Šperka, R. J. Howlett, & L. C. Jain (Eds.), Agents and multi-agent systems: Technologies and applications 2024 (pp. 271–281). Springer Nature. [Google Scholar] [CrossRef]
  38. Goldenkoff, E., & Cech, E. A. (2024). Left on their own: Confronting absences of AI ethics training among engineering master’s students. Available online: https://peer.asee.org/left-on-their-own-confronting-absences-of-ai-ethics-training-among-engineering-master-s-students (accessed on 1 September 2025).
  39. Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. [Google Scholar] [CrossRef]
  40. Haas, G.-C., Eckman, S., & Bach, R. (2021). Comparing the response burden between paper and web modes in establishment surveys. Journal of Official Statistics, 37(4), 907–930. [Google Scholar] [CrossRef]
  41. Heilinger, J.-C., Kempt, H., & Nagel, S. (2024). Beware of sustainable AI! Uses and abuses of a worthy goal. AI and Ethics, 4(2), 201–212. [Google Scholar] [CrossRef]
  42. Hirpara, N., Weber, M., Szakonyi, A., Cardona, T., & Singh, D. (2025). Exploring the role of GenAI in shaping education. In Intelligent human computer interaction (pp. 121–134). Springer. [Google Scholar] [CrossRef]
  43. Jafari, F., & Keykha, A. (2024). Identifying the opportunities and challenges of artificial intelligence in higher education: A qualitative study. Journal of Applied Research in Higher Education, 16(4), 1228–1245. [Google Scholar] [CrossRef]
  44. Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. [Google Scholar] [CrossRef]
  45. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. [Google Scholar] [CrossRef]
  46. Joo, M. (2024). It’s the AI’s fault, not mine: Mind perception increases blame attribution to AI. PLoS ONE, 19(12), e0314559. [Google Scholar] [CrossRef]
  47. Khan, B., & Nadeem, A. (2023). Evaluating the effectiveness of decomposed Halstead Metrics in software fault prediction. PeerJ Computer Science, 9, e1647. [Google Scholar] [CrossRef]
  48. Lam, L. W. (2012). Impact of competitiveness on salespeople’s commitment and performance. Journal of Business Research, 65(9), 1328–1334. [Google Scholar] [CrossRef]
  49. Levy-Feldman, I. (2025). The role of assessment in improving education and promoting educational equity. Education Sciences, 15(2), 224. [Google Scholar] [CrossRef]
  50. Li, H., Wang, Y., Luo, S., & Huang, C. (2025). The influence of GenAI on the effectiveness of argumentative writing in higher education: Evidence from a quasi-experimental study in China. Journal of Asian Public Policy, 18, 405–430. [Google Scholar] [CrossRef]
  51. Liang, H., Fan, J., & Wang, Y. (2025). Artificial intelligence, technological innovation, and employment transformation for sustainable development: Evidence from China. Sustainability, 17(9), 3842. [Google Scholar] [CrossRef]
  52. Limayem, M., Hirt, S. G., & Cheung, C. M. K. (2007). How habit limits the predictive power of intention: The case of information systems continuance. MIS Quarterly, 31(4), 705–737. [Google Scholar] [CrossRef]
  53. Liu, J., Chen, K., & Lyu, W. (2024). Embracing artificial intelligence in the labour market: The case of statistics. Humanities and Social Sciences Communications, 11(1), 1112. [Google Scholar] [CrossRef]
  54. Liu, T., Luo, Y. T., Pang, P. C.-I., & Kan, H. Y. (2025). Exploring the impact of information and communication technology on educational administration: A systematic scoping review. Education Sciences, 15(9), 1114. [Google Scholar] [CrossRef]
  55. Liut, M., Ly, A., Xu, J. J.-N., Banson, J., Vrbik, P., & Hardin, C. D. (2024, March 20–23). “I didn’t know”: Examining student understanding of academic dishonesty in computer science. 55th ACM Technical Symposium on Computer Science Education V. 1 (pp. 757–763), Portland, OR, USA. [Google Scholar] [CrossRef]
  56. Luo (Jess), J. (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49, 651–664. [Google Scholar] [CrossRef]
  57. Lyons, J. B., Hobbs, K., Rogers, S., & Clouse, S. H. (2023). Responsible (use of) AI. Frontiers in Neuroergonomics, 4, 1201777. [Google Scholar] [CrossRef]
  58. Machado, H., Silva, S., & Neiva, L. (2025). Publics’ views on ethical challenges of artificial intelligence: A scoping review. AI and Ethics, 5(1), 139–167. [Google Scholar] [CrossRef]
  59. Mallik, S., & Gangopadhyay, A. (2023). Proactive and reactive engagement of artificial intelligence methods for education: A review. Frontiers in Artificial Intelligence, 6, 1151391. [Google Scholar] [CrossRef]
  60. McCabe, T. J. (1976). A complexity measure. IEEE Transactions on Software Engineering, SE-2(4), 308–320. [Google Scholar] [CrossRef]
  61. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 115:1–115:35. [Google Scholar] [CrossRef]
  62. Montgomery, D. C., Peck, E. A., & Vining, G. G. (2021). Introduction to linear regression analysis. Wiley. [Google Scholar]
  63. Niu, T., Liu, T., Luo, Y. T., Pang, P. C.-I., Huang, S., & Xiang, A. (2025). Decoding student cognitive abilities: A comparative study of explainable AI algorithms in educational data mining. Scientific Reports, 15(1), 26862. [Google Scholar] [CrossRef]
  64. Nozari, H., Ghahremani-Nahr, J., & Szmelter-Jarosz, A. (2024). Chapter one—AI and machine learning for real-world problems. In S. Kim, & G. C. Deka (Eds.), Advances in computers (Vol. 134, pp. 1–12). Elsevier. [Google Scholar] [CrossRef]
  65. Olorunfemi, O. L., Amoo, O. O., Atadoga, A., Fayayola, O. A., Abrahams, T. O., & Shoetan, P. O. (2024). Towards a conceptual framework for ethical AI development in IT systems. Computer Science & IT Research Journal, 5(3), 616–627. [Google Scholar] [CrossRef]
  66. Pan, S., Goodnight, G. T., Zhao, X., Wang, Y., Xie, L., & Zhang, J. (2025). “Game changer”: The AI advocacy discourse of 2023 in the US. AI & Society, 40(4), 2807–2819. [Google Scholar] [CrossRef]
  67. Podgórska, M., & Zdonek, I. (2024). Interdisciplinary collaboration in higher education towards sustainable development. Sustainable Development, 32(3), 2085–2103. [Google Scholar] [CrossRef]
  68. Portocarrero Ramos, H. C., Cruz Caro, O., Sánchez Bardales, E., Quiñones Huatangari, L., Campos Trigoso, J. A., Maicelo Guevara, J. L., & Chávez Santos, R. (2025). Artificial intelligence skills and their impact on the employability of university graduates. Frontiers in Artificial Intelligence, 8, 1629320. [Google Scholar] [CrossRef] [PubMed]
  69. Qi, J., Liu, J., & Xu, Y. (2025). The role of individual capabilities in maximizing the benefits for students using GenAI tools in higher education. Behavioral Sciences, 15(3), 328. [Google Scholar] [CrossRef] [PubMed]
  70. Radanliev, P., Santos, O., Brandon-Jones, A., & Joinson, A. (2024). Ethics and responsible AI deployment. Frontiers in Artificial Intelligence, 7, 1377011. [Google Scholar] [CrossRef] [PubMed]
  71. Rodrigues, M., Silva, R., Borges, A. P., Franco, M., & Oliveira, C. (2025). Artificial intelligence: Threat or asset to academic integrity? A bibliometric analysis. Kybernetes, 54(5), 2939–2970. [Google Scholar] [CrossRef]
  72. Shahzad, M. F., Xu, S., An, X., Asif, M., & Javed, I. (2025). From policy to practice: A thematic analysis of generative AI technologies in China’s education sector. Interactive Learning Environments, 1–20. [Google Scholar] [CrossRef]
  73. Sheppard, B. H., Hartwick, J., & Warshaw, P. R. (1988). The theory of reasoned action: A meta-analysis of past research with recommendations for modifications and future research. Journal of Consumer Research, 15(3), 325–343. [Google Scholar] [CrossRef]
  74. Shrestha, S., & Das, S. (2022). Exploring gender biases in ML and AI academic research through systematic literature review. Frontiers in Artificial Intelligence, 5, 976838. [Google Scholar] [CrossRef]
  75. Strielkowski, W., Grebennikova, V., Lisovskiy, A., Rakhimova, G., & Vasileva, T. (2025). AI-driven adaptive learning for sustainable educational transformation. Sustainable Development, 33(2), 1921–1947. [Google Scholar] [CrossRef]
  76. Sumbal, M. S., & Amber, Q. (2025). ChatGPT: A game changer for knowledge management in organizations. Kybernetes, 54(6), 3217–3237. [Google Scholar] [CrossRef]
  77. Sykes, T. A., Venkatesh, V., & Gosain, S. (2009). Model of acceptance with peer support: A social network perspective to understand employees’ system use. MIS Quarterly, 33(2), 371–393. [Google Scholar] [CrossRef]
  78. Tamilmani, K., Rana, N. P., & Dwivedi, Y. K. (2021). Consumer acceptance and use of information technology: A meta-analytic evaluation of UTAUT2. Information Systems Frontiers, 23(4), 987–1005. [Google Scholar] [CrossRef]
  79. Tlili, A., Bond, M., Bozkurt, A., Arar, K., Chiu, T. K. F., & Rospigliosi, P. A. (2025). Academic integrity in the generative AI (GenAI) era: A collective editorial response. Interactive Learning Environments, 33, 1819–1822. [Google Scholar] [CrossRef]
  80. Vaishya, R., Dhall, S., & Vaish, A. (2024). Artificial Intelligence (AI): A potential game changer in regenerative orthopedics—A scoping review. Indian Journal of Orthopaedics, 58(10), 1362–1374. [Google Scholar] [CrossRef]
  81. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. [Google Scholar] [CrossRef]
  82. Wang, D., Dong, X., & Zhong, J. (2025). Enhance college AI course learning experience with constructivism-based blog assignments. Education Sciences, 15(2), 217. [Google Scholar] [CrossRef]
  83. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326. [Google Scholar] [CrossRef]
  84. Wijendra, D. R., & Hewagamage, K. P. (2021). Analysis of cognitive complexity with cyclomatic complexity metric of software. International Journal of Computer Applications, 174(19), 14–19. [Google Scholar] [CrossRef]
  85. Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21. [Google Scholar] [CrossRef]
  86. Zhang, R., & Wang, J. (2025). Perceptions, adoption intentions, and impacts of generative AI among Chinese university students. Current Psychology, 44(11), 11276–11295. [Google Scholar] [CrossRef]
  87. Zhou, T., Lu, Y., & Wang, B. (2010). Integrating TTF and UTAUT to explain mobile banking user adoption. Computers in Human Behavior, 26(4), 760–767. [Google Scholar] [CrossRef]
Figure 1. Research model within the context of ethical training.
Figure 2. Study’s experimental steps and data collection procedures.
Figure 3. AI ethics training modules applied in this study.
Figure 4. Ethical responsibility subjects and principles.
Figure 5. Residual plot for use behavior.
Figure 6. Residual plot for behavioral intention.
Figure 7. Results of the tests after ethical training.
Table 1. Characteristics of the sample.

| Variable | N | % |
|---|---|---|
| Gender | | |
| Female | 15 | 14.29% |
| Male | 90 | 85.71% |
| Age | | |
| 20 | 42 | 41.18% |
| 21 | 42 | 41.18% |
| 22 | 13 | 12.75% |
| 23 | 3 | 2.94% |
| 24 | 2 | 1.96% |
| Time Using AI | | |
| ≤1 month | 27 | 25.71% |
| >1 and ≤6 months | 34 | 32.38% |
| >6 and ≤12 months | 16 | 15.24% |
| >12 and ≤24 months | 23 | 21.90% |
| >24 months | 5 | 4.76% |
| Time Knowing of AI | | |
| ≤1 month | 4 | 3.81% |
| >1 and ≤6 months | 25 | 23.81% |
| >6 and ≤12 months | 25 | 23.81% |
| >12 and ≤24 months | 35 | 33.33% |
| >24 months | 16 | 15.24% |
Table 2. Construct validity and reliability.

| Construct | Item | Factor Loading | Cronbach’s Alpha | AVE | CR |
|---|---|---|---|---|---|
| Performance expectancy | PE1 | 0.664 | 0.708 | 0.450 | 0.710 |
| | PE2 | 0.684 | | | |
| | PE3 | 0.664 | | | |
| Effort expectancy | EE1 | 0.566 | 0.648 | 0.389 | 0.654 |
| | EE2 | 0.716 | | | |
| | EE3 | 0.578 | | | |
| Social influence | SI1 | 0.629 | 0.741 | 0.508 | 0.753 |
| | SI2 | 0.672 | | | |
| | SI3 | 0.822 | | | |
| Hedonic motivation | HM1 | 0.709 | 0.788 | 0.557 | 0.790 |
| | HM2 | 0.780 | | | |
| | HM3 | 0.748 | | | |
| Habit | HT1 | 0.758 | 0.840 | 0.644 | 0.844 |
| | HT2 | 0.842 | | | |
| | HT3 | 0.805 | | | |
| Behavioral intention | BI1 | 0.747 | 0.457 | 0.456 | 0.714 |
| | BI2 | 0.670 | | | |
| | BI3 | 0.602 | | | |
| Use behavior | UB1 | 0.821 | 0.724 | 0.512 | 0.744 |
| | UB2 | 0.830 | | | |
| | UB3 | 0.415 | | | |
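The AVE and CR columns in Table 2 follow the standard formulas of Fornell and Larcker (1981). As an illustrative check only (this is not the authors’ analysis script), the short Python sketch below reproduces the reported values for the habit construct from its three standardized loadings; applying the same two functions to the loadings of the other constructs reproduces the remaining rows of the table.

```python
# Illustrative sketch (not the authors' analysis script): reproducing the
# AVE and composite reliability (CR) columns of Table 2 from the reported
# standardized factor loadings, using the formulas of Fornell and Larcker (1981):
#   AVE = sum(loading^2) / k
#   CR  = (sum(loading))^2 / ((sum(loading))^2 + sum(1 - loading^2))

def ave(loadings):
    """Average variance extracted for one construct."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability for one construct."""
    squared_sum = sum(loadings) ** 2
    error_variance = sum(1 - l ** 2 for l in loadings)
    return squared_sum / (squared_sum + error_variance)

habit = [0.758, 0.842, 0.805]  # HT1-HT3 loadings as reported in Table 2
print(f"AVE = {ave(habit):.3f}")                    # 0.644, as in Table 2
print(f"CR  = {composite_reliability(habit):.3f}")  # 0.844, as in Table 2
```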
Table 3. Discriminant validity between constructs of using AI and ethical considerations.

| Variable | PE | EE | SI | HM | HT | BI | UB |
|---|---|---|---|---|---|---|---|
| PE | 0.671 | | | | | | |
| EE | 0.283 * | 0.624 | | | | | |
| SI | 0.142 | 0.397 ** | 0.712 | | | | |
| HM | 0.248 * | 0.387 ** | 0.183 | 0.746 | | | |
| HT | 0.221 * | 0.346 ** | 0.463 ** | 0.377 ** | 0.802 | | |
| BI | 0.250 * | 0.418 ** | 0.358 ** | 0.488 ** | 0.639 ** | 0.675 | |
| UB | 0.057 | 0.374 ** | 0.484 ** | 0.375 ** | 0.589 ** | 0.550 ** | 0.716 |

* p < 0.05; ** p < 0.01.
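Table 3 reports a Fornell–Larcker test of discriminant validity: each diagonal entry is the square root of the corresponding AVE from Table 2 (e.g., √0.644 ≈ 0.802 for habit) and should exceed every correlation in its row and column. The following illustrative sketch, using only the values reported above, confirms that this condition holds for all seven constructs:

```python
# Illustrative check of the Fornell-Larcker criterion underlying Table 3
# (not the authors' analysis script). Discriminant validity holds when the
# square root of each construct's AVE exceeds that construct's correlations
# with every other construct.
import math

# AVE values as reported in Table 2
ave = {"PE": 0.450, "EE": 0.389, "SI": 0.508, "HM": 0.557,
       "HT": 0.644, "BI": 0.456, "UB": 0.512}

# Largest inter-construct correlation per construct, read from Table 3
max_corr = {"PE": 0.283, "EE": 0.418, "SI": 0.484, "HM": 0.488,
            "HT": 0.639, "BI": 0.639, "UB": 0.589}

for construct in ave:
    sqrt_ave = math.sqrt(ave[construct])  # diagonal entry of Table 3
    passed = sqrt_ave > max_corr[construct]
    print(f"{construct}: sqrt(AVE) = {sqrt_ave:.3f} > max r = "
          f"{max_corr[construct]:.3f} -> {passed}")
```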
Table 4. Hypothesis test results.

| Hypothesis | Independent Construct | Dependent Construct | R² | β | t | p-Value | Result |
|---|---|---|---|---|---|---|---|
| H1 | Performance expectancy | Behavioral intention | 0.501 | 0.042 | 0.561 | 0.576 | Not supported |
| H2 | Effort expectancy | Behavioral intention | | 0.135 | 1.602 | 0.112 | Not supported |
| H3 | Social influence | Behavioral intention | | 0.033 | 0.399 | 0.691 | Not supported |
| H4 | Hedonic motivation | Behavioral intention | | 0.239 | 2.951 | 0.004 | Supported |
| H5 | Habit | Behavioral intention | | 0.478 | 5.569 | <0.001 | Supported |
| H6 | Habit | Use behavior | 0.477 | 0.299 | 2.960 | 0.004 | Supported |
| H7 | Behavioral intention | Use behavior | | 0.222 | 2.141 | 0.035 | Supported |
| H8 | Performance expectancy | Use behavior | | −0.149 | −1.925 | 0.057 | Supported |
| H9 | Hedonic motivation | Use behavior | | 0.116 | 1.333 | 0.186 | Not supported |
| H10 | Social influence | Use behavior | | 0.234 | 2.705 | 0.008 | Supported |
| H11 | Effort expectancy | Use behavior | | 0.082 | 0.938 | 0.351 | Not supported |

Note: R² = 0.501 for the behavioral intention model (H1–H5) and R² = 0.477 for the use behavior model (H6–H11).
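For readers less familiar with this kind of output, the sketch below illustrates what the β, t, and p columns of Table 4 represent. It is a didactic stand-in only: the study estimated these paths with SEM, whereas the sketch fits an ordinary least squares regression on simulated composite scores, so every number it prints is synthetic.

```python
# Self-contained, didactic sketch of what the beta, t, and p columns in
# Table 4 represent. The study estimated these paths with structural
# equation modeling; ordinary least squares on SIMULATED standardized
# composite scores is used here only to illustrate the form of the output.
# All numbers produced by this script are synthetic, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 105  # same sample size as the study

# Hypothetical standardized construct scores (synthetic data)
df = pd.DataFrame(rng.standard_normal((n, 5)),
                  columns=["PE", "EE", "SI", "HM", "HT"])
# Behavioral intention driven mainly by habit and hedonic motivation,
# loosely mirroring the supported paths H4 and H5
df["BI"] = 0.48 * df["HT"] + 0.24 * df["HM"] + 0.7 * rng.standard_normal(n)

X = sm.add_constant(df[["PE", "EE", "SI", "HM", "HT"]])
fit = sm.OLS(df["BI"], X).fit()
print(fit.rsquared)   # analogous to the R^2 reported for behavioral intention
print(fit.params)     # coefficients, analogous to the beta column
print(fit.tvalues)    # t statistics
print(fit.pvalues)    # p-values used to judge support for each hypothesis
```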
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
