Article

A TAM-Based Analysis of Hong Kong Undergraduate Students’ Attitudes Toward Generative AI in Higher Education and Employment

Institute for Research in Open and Innovative Education, Hong Kong Metropolitan University, Kowloon, Hong Kong, China
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(7), 798; https://doi.org/10.3390/educsci15070798
Submission received: 16 April 2025 / Revised: 18 June 2025 / Accepted: 18 June 2025 / Published: 20 June 2025
(This article belongs to the Topic AI Trends in Teacher and Student Training)

Abstract

This study explores undergraduate students’ attitudes towards generative AI tools in higher education and their perspectives on the future of jobs. It aims to understand the decision-making processes behind adopting these emerging technologies. A multidimensional model based on the technology acceptance model was developed to assess various factors, including perceived ease of use, perceived benefits, perceived concerns, knowledge of AI, and students’ perceptions of generative AI’s impact on the future of jobs. Data were collected through a survey distributed to 93 undergraduate students at a university in Hong Kong. The findings of multiple regression analyses revealed that these factors collectively explained 23% of the variance in frequency of use (F(4, 78) = 5.89, p < 0.001, R2 = 0.23). Perceived benefits played the most significant role in determining frequency of use of generative AI tools. While students expressed mixed attitudes toward the role of AI in the future of jobs, those who voiced concerns about AI in education were more likely to view generative AI as a potential threat to job availability. The results provide insights for educators and policymakers to promote the effective use of generative AI tools in academic settings to help mitigate risks associated with overreliance, biases, and the underdevelopment of essential soft skills, including critical thinking, creativity, and communication. By addressing these challenges, higher education institutions can better prepare students for a rapidly evolving, AI-driven workforce.

1. Introduction

1.1. Generative Artificial Intelligence in Higher Education

The Organisation for Economic Co-operation and Development (OECD) defines an artificial intelligence (AI) system as “…a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” (OECD, 2019, pp. 23–24). Technologies categorised as AI are designed to mimic or simulate human intelligence. AI is already pervasive in innovative technologies such as fitness trackers, algorithm-driven social media platforms, and smart home security devices and is increasingly being adopted in the education sector (Li & Wong, 2023).
The arrival of ChatGPT in late 2022 introduced the world to generative AI, a new form of AI technology capable of recognising and responding to its environment, interpreting information, and generating new content (V. Chan & Tang, 2024; C. K. Y. Chan & Tsi, 2024). The high efficiency and adaptability of generative AI have led to its widespread adoption across industries.
However, the use of generative AI remains a divisive topic in higher education. Proponents of integrating generative AI technologies into academia emphasise their ability to streamline feedback and personalise learning experiences. These technologies provide ubiquitous access to a variety of learning materials, and students benefit from data-driven feedback and individualised learning experiences tailored to their needs. Tools such as educational chatbots, automated online learning systems, and assistive robot mentors create inclusive and accessible environments for diverse student needs (Zhang & Zhang, 2024).
On the other hand, there have been concerns about students developing an over-reliance on these tools, which may negatively impact their development of cognitive and logical skills, communication, collaboration, and overall academic capacity (Yusuf et al., 2024; Barrot, 2023). This reliance may hinder the development of essential research, writing, and communication skills. Additionally, generative AI tools lack the ability to demonstrate empathy and emotional intelligence, which are widely regarded as cornerstones of effective teaching and learning experiences (C. K. Y. Chan & Tsi, 2024). These tools may also produce biased or incorrect information, and coupled with the underdevelopment of critical thinking skills, students may be at greater risk of accepting and reproducing inaccuracies.
How students interact with their environment and interpret the benefits and drawbacks of available resources, alongside the overall learning context, affects the successful integration and appropriate use of new technologies. Identifying student attitudes and opinions is therefore key to collaborating with students on the integration of new teaching and learning tools; C. K. Y. Chan and Hu (2023), for example, examined student perceptions of generative AI across six Hong Kong universities. Yusuf et al. (2024) argued that the failure of technological adoption often stems from neglecting user opinions and behaviours while overemphasising technical capabilities. Conversely, overenthusiasm for new technology may lead to ignorance of its limitations and insufficient caution in its use (Tully et al., 2025). Although emerging studies have sought to understand the variables influencing how new users interact with and incorporate new technologies, there is a lack of research examining the interplay between affective, knowledge-based, and behavioural variables (Bewersdorff et al., 2025). By understanding students’ concerns and knowledge regarding the limitations of AI, policymakers can determine how restrictive AI usage guidelines should be to ensure safe and effective use.
This study examines the impact of positive and negative perceptions on the usage rates of generative AI tools in higher education, with a focus on student attitudes toward the future of jobs. While the existing literature primarily focuses on generative AI’s role as an educational tool (Acosta-Enriquez et al., 2024; C. K. Y. Chan & Hu, 2023; Noh et al., 2021; Slamet, 2024; Tala et al., 2024), there is a gap in the research exploring students’ views on AI’s impact on the future of jobs. Specifically, there is a lack of studies examining discrepancies in students’ opinions on generative AI as both an academic support tool and a potential competitor in the job market.
The intended impact of this research is to provide insight into students’ views on generative AI technologies and how these attitudes influence their adoption and use of such tools. Understanding user perceptions and attitudes is a key indicator of successful integration and impact (Sumakul et al., 2022). By collecting data on students’ attitudes and their thoughts on the long-term effects of AI on employability, educators can collaborate with students to design effective policies and programs. Policymakers and universities can also benefit from considering these perspectives when developing AI-related policies for the student body.

1.2. Generative Artificial Intelligence and the Future of Jobs

The introduction of new technologies is a key determinant of the future of jobs. This concept encompasses the dynamics of job creation versus displacement, wage dynamics, and shifts in the quality and availability of employment opportunities (Balliester & Elsheikhi, 2018). The phenomenon of creative destruction, driven by innovation and technological advancement, has affected sectors concentrated in manufacturing and service industries, as well as developing economies. Higher education and postgraduate qualifications have traditionally offered protection against these risks by providing skills training for jobs that emphasise creativity, critical thinking, and problem-solving.
However, the advent of computerisation and the rise of generative AI in cognitive domains have posed significant threats to jobs across all sectors, even as the availability of workers with higher education levels increases (Frey & Osborne, 2017). Since the launch of ChatGPT in late 2022, the program has attracted users from 209 out of 218 global economies (Liu & Wang, 2024). According to a report by the International Monetary Fund (IMF), 40% of global employment will be exposed to AI, with advanced economies disproportionately affected. An estimated 60% of jobs from developed nations are expected to be impacted, largely due to AI’s ability to perform cognitive task-oriented jobs (Cazzaniga et al., 2024). The opportunities and threats posed by generative AI have garnered attention across a broad range of industries, such as education (Inamdar & Kumar, 2025), journalism (Vicsek et al., 2025), pharmacy (Gustafson et al., 2025), and architecture (Elrawy & Wagdy, 2025).
Brown (2023) highlighted this debate, referring to labour scarcity theory and job scarcity theory. Labour scarcity theory suggests that demand for knowledge workers will grow due to new technologies, emphasising the role of education in enabling workers to upskill and remain competitive in an era of digital innovation. However, job scarcity theory posits that the number of “good quality employment opportunities” will decrease (p. 476). As generative AI integrates into higher education classrooms and programs, these tools also emerge as significant competitors in the job market, with the potential to replace human labour.
Much of the discourse has focused on the workplace implications of generative AI. From the perspective of employees, Wut and Chan (2025) investigated how working adults’ perceptions influence their attitudes toward the adoption of generative AI. Irish et al. (2025) identified a significant disconnect between educational institutions and industry practices in preparing students and employees for the transformations brought about by generative AI. Their findings highlight the need for systematic integration of hands-on training and industry-aligned curricula that focus on the practical and ethical use of generative AI in higher education.
However, the perspectives of students remain conspicuously absent from the literature. Existing work addressing students’ attitudes on generative AI has primarily focused on its use as an educational tool (Acosta-Enriquez et al., 2024; C. K. Y. Chan & Hu, 2023; Slamet, 2024; Tala et al., 2024). Despite its importance, there is a notable lack of research examining students’ attitudes toward the potential impact of this technology on their future careers. This study addresses this issue by uncovering students’ attitudinal discrepancies between academic and employment-related AI usage, as well as the factors contributing to them.
Addressing this research gap is critical for understanding how students perceive their employability and job prospects in an evolving job market. Such insights are essential for evaluating existing university policies and designing future initiatives to support students (Vicsek et al., 2025). Data on students’ attitudes can serve as indicators of the effectiveness of institutional efforts to enhance graduate skills and competitiveness. Furthermore, these insights can inform the development of tailored support mechanisms for students pursuing career paths more susceptible to job scarcity in the age of generative AI.

1.3. Aim of the Study

This study aimed to explore the varying attitudes of undergraduate students toward the integration of generative AI technologies into higher education and their implications for the future of jobs.
The research questions guiding this study were as follows:
  • What are the attitudes of undergraduate students toward generative AI tools in higher education?
  • How do their perceived benefits and concerns regarding the integration of generative AI into higher education affect the frequency of use?
  • How do students perceive generative AI tools in relation to the future of jobs?
  • What are the differences in perceptions of generative AI technology in education versus the workplace?
The dual focus of this study is to capture student attitudes towards generative AI technologies and inform future education policies. By gaining an in-depth understanding of the student population, educators and policymakers can design guidelines that maximise benefits while addressing concerns. For example, Stohr et al. (2024) found in a study of Swedish students that the majority of participants had limited or no awareness of university guidelines or policies regarding AI usage.
Higher education institutions must adapt to prepare students for a future where AI tools serve as both aids to efficiency and competitors in the job market. The increasing influence of generative AI in academia may challenge the value of traditional degrees. While labour scarcity theory suggests education is the primary means to remain competitive (Brown, 2023), generative AI challenges this norm, necessitating institutional adaptation to remain relevant.
Given the novelty of generative AI technology, there is limited data on its effects on the future of jobs. Studies on student perceptions of AI in the workplace can help evaluate programs designed to prepare them for a rapidly evolving technological landscape. This research aims to contribute to the growing field of digitalisation by identifying the nuances of incorporating generative AI tools into higher education and understanding their broader implications for employability.

2. Framework and Constructs

The technology acceptance model (TAM) developed by Davis (1989) has been widely recognised as the foundational framework for understanding the adoption of new technologies. However, despite its popularity, TAM does not account for additional factors that may influence technology usage and formal integration. For studies in various contexts, TAM has been expanded by incorporating additional constructs and measures (Granić, 2023).
TAM was adopted for this study based on its suitability for addressing the constructs under investigation. Although numerous technology acceptance models have since been developed, the constructs covered by these models do not address the context of this study. For example, the subsequent version of TAM, i.e., TAM1 (Venkatesh & Davis, 1996), removed attitude from the model. TAM2 (Venkatesh & Davis, 2000) extended TAM1 by including antecedents (e.g., subjective norm and voluntariness) of perceived usefulness. TAM3 (Venkatesh & Bala, 2008) further enlarged the set of constructs (e.g., computer anxiety and perceived enjoyment) preceding perceived ease of use. Another technology acceptance model, the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003), integrated elements across eight relevant models into four constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions). Its later version, UTAUT2 (Venkatesh et al., 2012), included more constructs (hedonic motivation, price value, and habit) and extended the model to consumer use contexts. Among these technology acceptance models, attitude, a core construct within the current study, is explicitly addressed in the original TAM but not in the others.
As defined by Huedo-Martínez et al. (2018), attitude refers to an “evaluative judgement” regarding the integration of AI into university curricula. It plays a key role in affecting the use of a technology when the use is voluntary (Yousafzai et al., 2007). Rondan-Cataluña et al. (2015) compared various technology acceptance models and revealed that attitude did not play an essential role when users were obliged to use a technology but gained weight in contexts of voluntary technological use. Their comparison results also found that later versions of TAM do not seem to provide better explanations for technology acceptance than the original. While UTAUT2 shows a better explanatory power, it has been developed for consumer-focused technologies with constructs such as price value (Rondan-Cataluña et al., 2015). Granić (2023) highlighted TAM’s parsimony, emphasising its effectiveness with a minimal yet impactful set of constructs, and showed that TAM has been the most widely used model in research into the adoption of educational technology. Taking into consideration the relevant factors, the original TAM was used as a foundation model for this study to account for students’ attitudes toward generative AI in a context in which students’ use of AI was voluntary.
This study evaluated four key constructs: Perceived Ease of Use, Perceived Benefits, Knowledge of AI, and Perceived Concerns. Figure 1 provides a visual representation of these constructs and their interrelationships.

2.1. Technology Acceptance Model—Perceived Ease of Use and Perceived Benefits

TAM is built on two primary constructs: perceived ease of use and perceived usefulness (referred to as perceived benefits in this study). The model posits that individuals are more likely to adopt new technology when it is perceived as easy to use and beneficial to their goals (Yao et al., 2024; Rožman et al., 2023).
Empirical studies have supported the model. For instance, students are more inclined to adopt digital tools when instructions are straightforward and usability is high. User-friendliness reduces the barriers to engagement, creating a positive correlation between perceived usefulness and improved academic performance. Factors such as increased efficiency, effectiveness, and enhanced learning experiences contribute to the value students place on these tools (Noh et al., 2021; Rožman et al., 2023). In the literature on consumer behaviour, it is similarly emphasised that individuals evaluate a product or service’s ability to achieve a specific purpose before deciding to adopt it (Arts et al., 2011).
Generative AI stands apart from earlier educational technologies due to its ubiquity and versatility. Studies have consistently shown that students who perceive new digital tools as valuable and relevant in their field are more willing to adopt them. This forms the basis for examining whether perceived ease of use and perceived benefits impact frequency of use.

2.2. Knowledge of AI

This study also examined students’ knowledge and awareness of the limitations of generative AI, measuring their AI literacy against adoption rates. Here, “knowledge of AI” is defined as literacy regarding the limitations and shortcomings of generative AI technologies. A separate construct evaluates the weight students place on concerns about the potential adverse effects of AI usage.
The relationship between knowledge of AI and frequency of use remains inconclusive. For example, C. K. Y. Chan and Hu (2023) reported a moderate positive correlation between knowledge of AI limitations and willingness to use AI tools. Conversely, Acosta-Enriquez et al. (2024) found no strong correlation between AI literacy and its adoption among higher education students. Similarly, research involving computer engineering students revealed that despite high levels of AI literacy and frequent use, students expressed significant hesitancy towards adopting generative AI (Huedo-Martínez et al., 2018).
Understanding the interplay between AI literacy and usage can inform policymakers on how to tailor guidelines. For instance, low AI literacy may necessitate educational programs, while high literacy coupled with low usage may require greater emphasis on transparency and trust.

2.3. Perceived Concerns

“Perceived concerns” refer to users’ subjective evaluations of potential adverse effects and risks associated with AI usage (Ali et al., 2021). Concerns play a critical role in shaping attitudes and behaviours, particularly with innovative technologies. For example, Ma et al. (2024) found that perceived concerns negatively influenced consumers’ behaviour in e-commerce, with higher concern leading to lower online purchase rates.
In the context of generative AI, identifying students’ concerns is crucial for understanding the long-term viability and successful integration of these tools into higher education. Incorporating “perceived concerns” into TAM allows a more nuanced understanding of how perceived risks interact with perceived benefits in shaping adoption decisions.

2.4. Perceptions of AI and the Future of Jobs

A 2012 European Union study revealed that a majority of participants agreed with the statement, “Robots steal people’s jobs” (European Commission, 2012, p. 8). A 2017 replication of the study yielded similar results, with Dekker et al. (2017) finding that unemployed individuals, those with lower education levels, and blue-collar workers exhibited higher levels of fear regarding job displacement by robots.
Automation and emerging technologies have been perceived as threats primarily to roles requiring lower levels of cognitive skills and critical thinking (Waring, 2024). The ability of generative AI to “learn” and “think critically” may have stimulated similar concerns. For example, a 2020 study found that 25.2% of college students expressed moderate to high concern about AI replacing human jobs (Jeffrey, 2020). These findings form the basis for studying how perceived concerns about generative AI in education negatively influence perceptions of its role in the workplace.

3. Materials and Methods

This study employed a quantitative methodology using a cross-sectional, non-experimental research design. This research design was chosen in order to collect data at a single point in time to capture students’ attitudes toward generative AI tools and their perceptions of AI’s impact on the future of jobs. There were no interventions or manipulations of the participating students, as the focus was on their naturally occurring attitudes and behaviours.

3.1. Instrument Development and Validation

The survey instrument was developed by adapting and combining validated scales from studies on generative AI in higher education, students’ perceptions, and the long-term impact of emerging technologies on employment (C. K. Y. Chan & Hu, 2023; Rožman et al., 2023). This approach ensured that the instrument aligned with established measures, facilitating comparability with broader datasets and enhancing the validity of the findings.
A validation process was undertaken to ensure the content and face validity of the instrument. First, an initial draft of the survey was reviewed by a panel of three subject-matter experts to evaluate the survey items for relevance and alignment with the study’s objectives. Feedback from the experts was collected to inform iterative revisions to the instrument. Next, the revised survey underwent a pilot test with 25 undergraduate students who were representative of the target population. Participants in the pilot study provided feedback on the clarity and length of the questions. Based on their input, further adjustments were made to improve the wording of certain items.
The final instrument consisted of 23 items designed to assess AI literacy (knowledge of AI), perceived ease of use, perceived benefits, perceived concerns, and views on AI’s impact on the job market. Demographic and screening questions were included to ensure the representativeness of the sample, along with quality control prompts to identify and exclude invalid responses.
The survey was initially developed in English and then translated into Chinese to accommodate the linguistic preferences of the target population. The translated instrument was reviewed by two bilingual experts to ensure its accuracy and cultural relevance.
Participants responded to items using a 5-point Likert scale, ranging from strongly disagree (1) to strongly agree (5). For the construct measuring frequency of use, an inverted Likert scale was employed, ranging from daily use (1) to never (5).
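Reverse-coding an inverted item like this is a standard preprocessing step when an analyst prefers higher values to mean more frequent use. A minimal sketch (the responses and variable names below are hypothetical, not the study's data); a 5-point item reverse-codes as (max + 1) − score:

```python
import pandas as pd

# Hypothetical responses on the inverted scale: 1 = daily use ... 5 = never
freq_raw = pd.Series([1, 3, 5, 2, 4], name="frequency_of_use")

# Reverse-code a 5-point item with (max + 1) - score, so higher = more frequent
freq_reversed = 6 - freq_raw
print(freq_reversed.tolist())  # [5, 3, 1, 4, 2]
```

Note that if the inverted scale is kept as-is (as reported here), predictors that increase usage will show negative regression coefficients, which is consistent with the signs reported in the Results section.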

3.2. Data Collection and Sampling

This formal study was conducted at a university in Hong Kong, targeting undergraduate students as the population of interest. Participants were recruited through convenience sampling, with the survey distributed via a web-based platform accessible from participants’ personal devices.
A total of 93 responses were collected, of which 83 valid responses were retained after filtering out incomplete or irregular submissions. This sample served the study’s exploratory nature as a preliminary investigation into the research questions.

3.3. Data Analysis

Data analysis was conducted using IBM SPSS Statistics 26. Multiple regression analyses were performed to identify the strength and direction of relationships between the constructs.
To assess the reliability of the survey instrument, Cronbach’s alpha coefficients were calculated for each construct. As shown in Table 1, most constructs exceeded the general recommended threshold of 0.70 for internal consistency.
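Internal-consistency estimates like those in Table 1 follow the standard Cronbach's alpha formula: α = k/(k−1) × (1 − Σ item variances / variance of total score). A minimal sketch with hypothetical response data (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from six students on a three-item subscale
scores = np.array([
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
    [2, 3, 2],
    [4, 5, 4],
    [3, 2, 3],
], dtype=float)

print(round(cronbach_alpha(scores), 2))  # → 0.88, above the 0.70 threshold
```

Values at or above 0.70, the threshold cited here, are conventionally taken to indicate acceptable internal consistency.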

4. Results

4.1. Demographic Information

Table 2 provides a summary of the demographic characteristics of the study participants. The participants were recruited from five undergraduate programs: Translation, Computing and Interactive Entertainment, New Music and Interactive Entertainment, Data Science and AI, and Computer Science. The sample consisted of 43 females (52%) and 40 males (48%). Analysis revealed that 17% of the students were enrolled in STEM programs, while the remaining 83% were in non-STEM programs. The majority of participants were aged 20–24 years (48%) or 19 years and younger (47%), with only 5% aged 25 or older.
Over 90% of participants reported prior experience with generative AI tools. Specifically, 69% (n = 57) indicated they had used generative AI tools at university, 5% (n = 4) had only used them in secondary school, and 22% (n = 18) reported using them in both contexts. A small minority (5%, n = 4) reported never using such tools. Regarding frequency of use, 8.4% (n = 7) reported daily usage, while the majority (43.4%, n = 36) used these tools weekly. Monthly usage was reported by 25.3% (n = 21), and 22% (n = 19) reported quarterly or no usage.
Participants identified various purposes for using generative AI tools, including finding information (n = 53), generating ideas (n = 53), translating texts (n = 52), summarising texts (n = 34), creating images/audios/videos (n = 32), drafting texts (e.g., emails, essays) (n = 20), analysing data (n = 15), asking for advice (n = 15), seeking recommendations (n = 10), and preparing for job applications/interviews (n = 7). The most commonly used tools were chatbot programs such as ChatGPT (n = 57) and Poe (n = 22).

4.2. Descriptive Results

Table 3 presents participants’ perceptions of the ease of use of generative AI tools. Overall, participants found generative AI tools user-friendly, with moderate to high average scores across the three statements. The statement “Generative AI tools are easy to use” received the highest mean score (3.91, SD = 0.94). Confidence slightly declined for statements about learning how to use these tools (mean = 3.87, SD = 0.96), and applying them to academic tasks (mean = 3.29, SD = 1.00). The results suggest that while students found the tools intuitive, their application in academic contexts may require additional effort.
Table 4 highlights participants’ understanding of the limitations of generative AI tools. Scores ranged from moderate to high (mean = 3.28 to 4.39), with the highest-rated concern being the potential for inaccurate outputs (mean = 4.39, SD = 0.66). Participants also acknowledged limitations such as the generation of irrelevant outputs (mean = 3.94, SD = 0.85) and overreliance on statistical data, which may limit usefulness in novel contexts (mean = 3.80, SD = 0.84). However, there was less certainty regarding issues such as bias in outputs (mean = 3.34, SD = 0.93) and the tools’ lack of emotional intelligence (mean = 3.28, SD = 1.06).
Table 5 outlines participants’ attitudes toward the benefits of generative AI tools. Responses were generally positive, with the highest-rated statement being “Generative AI tools provide students with additional learning support” (mean = 4.10, SD = 0.67). Other highly rated benefits included the tools’ usefulness for study purposes (mean = 3.95, SD = 0.70) and their ability to complement students’ learning needs (mean = 3.80, SD = 0.74). However, the statement “Generative AI tools clarify many points that teachers cannot cover in their explanation” received a lower score (mean = 3.41, SD = 0.91), suggesting that students viewed these tools as supplementary aids rather than replacements for traditional teaching methods.
Table 6 illustrates participants’ perceptions of potential concerns associated with generative AI usage. Overall, concerns were rated relatively low, with an average score of 2.51 across all statements. Participants did not perceive significant risks to their development of transferable skills (mean = 2.51, SD = 1.07), their ability to interact with peers (mean = 2.41, SD = 1.02), or their communication with teachers (mean = 2.41, SD = 1.10). Similarly, concerns about over-reliance on AI tools (mean = 2.51, SD = 1.09) and the diminishing role of teachers (mean = 2.43, SD = 1.11) were rated low. These findings reinforce the perception that generative AI tools are viewed as complementary aids rather than disruptive forces in traditional education.
Table 7 highlights participants’ perceptions regarding the impact of generative AI on the future of jobs. The two statements “Generative AI technology steals people’s jobs” (mean = 3.10, SD = 1.11) and “Generative AI technology will make it harder for me to find a job after graduation” (mean = 3.25, SD = 1.03) indicate a moderate level of agreement among respondents. The results suggest that students are somewhat concerned about the potential challenges generative AI may pose to their post-graduation employment prospects.

4.3. Regression Analysis

To examine the relationships between student attitudes toward generative AI tools and their reported frequency of use, multiple regression analyses were conducted. Diagnostic tests were performed to ensure adherence to regression assumptions. Variance inflation factor (VIF) values ranged from 1.05 to 1.66, indicating no issues with multicollinearity. The Durbin–Watson statistic was 1.63, suggesting residual independence. Visual inspection of residual plots confirmed linearity, homoscedasticity, and normality.
A multiple regression analysis was conducted to examine the relationship between the subscales and students’ reported frequency of use of generative AI tools. The model was statistically significant, (F(4, 78) = 5.89, p < 0.001), R2 = 0.23, indicating that the predictors collectively explained 23% of the variance in frequency of use. The regression coefficients are presented in Table 8.
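A model of this form, together with the diagnostics above, can be reproduced with standard numerical tools. A minimal NumPy-only sketch on simulated data (the variable names, distributions, and the planted effect of perceived benefits are illustrative assumptions, not the study's data):

```python
import numpy as np

def ols_fit(X, y):
    """Fit y = b0 + X @ b by least squares; return coefficients, residuals, R^2, F."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])          # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    p = X.shape[1]                                 # number of predictors
    f_stat = (r2 / p) / ((1.0 - r2) / (n - p - 1))
    return beta, resid, r2, f_stat

def durbin_watson(resid):
    """Durbin-Watson statistic; values near 2 indicate independent residuals."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def vifs(X):
    """Variance inflation factor per predictor: regress it on the others."""
    return [1.0 / (1.0 - ols_fit(np.delete(X, j, axis=1), X[:, j])[2])
            for j in range(X.shape[1])]

# Simulated stand-in for the four subscale scores (ease of use, benefits,
# knowledge, concerns) from n = 83 respondents
rng = np.random.default_rng(42)
n = 83
X = rng.normal([3.7, 3.8, 3.6, 2.5], [0.9, 0.7, 0.4, 1.0], size=(n, 4))
# Inverted frequency scale (1 = daily ... 5 = never): benefits lower the score
y = 4.0 - 0.4 * X[:, 1] + rng.normal(0.0, 0.8, n)

beta, resid, r2, f_stat = ols_fit(X, y)
print(f"R2 = {r2:.2f}, F(4, {n - 5}) = {f_stat:.2f}")
print("VIF:", [round(v, 2) for v in vifs(X)])
print("Durbin-Watson:", round(durbin_watson(resid), 2))
```

With n = 83 and four predictors, the F statistic has (4, 78) degrees of freedom, matching the figures reported above; VIF values near 1 and a Durbin-Watson statistic near 2 correspond to the diagnostic checks described.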
To explore the multi-faceted nature of student attitudes towards generative AI in education and the future of work, partial regression analyses were conducted to examine the relationships between the subscales.
  • Relationship between Perceived Ease of Use, Perceived Benefits, and Frequency of Use
A regression analysis was conducted to test the relationship between perceived ease of use, perceived benefits, and frequency of use. The resulting model was statistically significant, F(2, 80) = 9.83, p < 0.001, with R = 0.44 and a coefficient of determination of R2 = 0.20, indicating that 20% of the variance in frequency of use was explained by the predictors. However, the results revealed mixed findings, as shown in Table 9.
The analysis revealed that perceived benefits significantly predicted frequency of use (β = −0.29, t(80) = −2.54, p = 0.01), while perceived ease of use did not meet the significance threshold (β = −0.22, t(80) = −1.95, p = 0.06).
While previous studies, such as Rožman et al. (2023) and Yao et al. (2024), have demonstrated that perceived ease of use positively influences technology adoption, the findings in this study suggest that perceived benefits play a more significant role in predicting frequency of use.
  • Relationship between Knowledge of AI and Frequency of Use
A regression analysis was conducted to test the relationship between knowledge of AI and frequency of use. The resulting model was not statistically significant, F(1, 81) = 1.26, p = 0.27, with a coefficient of determination of R2 = 0.02, indicating that only 2% of the variance in frequency of use was explained by the predictor. The regression coefficients are presented in Table 10.
The analysis revealed that knowledge of AI did not significantly predict frequency of use (β = −0.12, t(81) = −1.12, p = 0.27). Participants demonstrated a moderate level of literacy regarding AI limitations, with a mean score of 3.61 (SD = 0.43). However, the lack of statistical significance suggests that knowledge of AI does not directly influence usage frequency.
The literature on the relationship between AI literacy levels and usage frequency is mixed. Acosta-Enriquez et al. (2024) found no significant influence between these variables among Generation Z university students, while C. K. Y. Chan and Hu (2023) reported a positive correlation between AI literacy and rates of its adoption among Hong Kong university students. However, C. K. Y. Chan and Hu (2023) did not specifically examine knowledge of AI limitations. The findings of the current study aligned more closely with Acosta-Enriquez et al. (2024), as no significant relationship was observed.
  • Relationship between Perceived Ease of Use, Perceived Benefits, Perceived Concerns, and Frequency of Use
A multiple regression analysis was conducted to predict frequency of use based on perceived ease of use, perceived benefits, and perceived concerns. The model was statistically significant, F(3, 79) = 7.95, p < 0.001, with R2 = 0.23, indicating that the predictors collectively explained 23% of the variance in frequency of use.
Table 11 presents the regression coefficients, which indicate that perceived ease of use (β = −0.25, p = 0.03) and perceived benefits (β = −0.24, p = 0.04) were significant predictors of frequency of use. Perceived concerns, by contrast, fell just short of significance (β = 0.19, p = 0.06); given this borderline p-value, perceived concerns cannot be confidently interpreted as influential in this model.
The impact of perceived risk and trust on the adoption of technology has been extensively studied. For example, Yao et al. (2024) found that 43.2% of the variance in trust in mobility technologies could be explained by perceived risk and belief. Similarly, Acosta-Enriquez et al. (2024) noted that while students perceived language learning models as valuable, they also expressed concerns about potential shortcomings, such as limited accuracy, misinformation, and over-dependency. In a study of U.S. college students’ adoption of ChatGPT, Baek et al. (2024) highlighted themes of pragmatism, optimism, and collaboration among those who expressed “no concerns” about AI.
Perceived concerns did not emerge as a significant predictor of frequency of use. Instead, perceived ease of use and perceived benefits were more influential. These findings align with prior research emphasising the importance of perceived usefulness in technology adoption (Noh et al., 2021). While the original TAM framework prioritises positive perceptions (e.g., usefulness and ease of use), this study highlights the limited role of negative perceptions (e.g., concerns) in influencing the adoption of generative AI.
  • Relationship between Perceived Concerns about Generative AI in Education and Perceptions of Its Role in the Workplace
Two regression analyses were conducted to examine the relationship between perceived concerns and two dependent variables: (1) the belief that generative AI will steal jobs ([R22]) (Table 12) and (2) the belief that generative AI will negatively impact individual employment prospects ([R23]) (Table 13).
The first regression model predicting [R22] was statistically significant, F(1, 81) = 5.97, p = 0.02, with R2 = 0.07, indicating that perceived concerns explained 7% of the variance in the belief that generative AI will steal jobs (Table 12). The regression coefficient for perceived concerns was significant (β = 0.26, p = 0.02), suggesting that students who expressed greater concerns about generative AI were more likely to believe that these technologies would lead to job displacement.
The second regression model predicting [R23] was not statistically significant, F(1, 81) = 1.17, p = 0.28, with R2 = 0.01, indicating that perceived concerns explained only 1% of the variance in the belief that generative AI will negatively impact individual employment prospects (Table 13). The regression coefficient for perceived concerns was not significant (β = 0.12, p = 0.28), suggesting that concerns about generative AI do not strongly influence students’ views of their employment prospects.
While perceived concerns significantly predicted the belief that generative AI will steal jobs ([R22]), they did not significantly predict the belief that generative AI will negatively impact individual employment prospects ([R23]). This discrepancy aligns with prior studies highlighting nuanced perspectives on AI’s impact on the job market. For example, Vicsek et al. (2022) found that students often differentiate between AI’s collective impact on the labour market and their personal employment prospects. Similarly, Dekker et al. (2017) noted that fears of automation were more pronounced among individuals in manual labour roles or with lower educational credentials. While concerns about generative AI influence beliefs about job displacement at a societal level, they appear less relevant to students’ personal employment outlooks.

5. Discussion

This study reveals the nuanced and multifaceted attitudes undergraduate students hold toward generative AI tools. Students generally viewed generative AI as a productive aid that enhances teaching and learning, while expressing concerns about its limitations, such as inaccuracies and biases in outputs. These results highlight the intricate balance between the benefits and drawbacks of this emerging educational technology and how these perceptions influence students’ adoption and usage behaviours.
The results challenge the traditional TAM by demonstrating that only perceived benefits have a statistically significant influence on the usage of generative AI. The role of perceived ease of use in determining usage adoption appears to be diminishing. This deviation from TAM suggests that the intuitive and user-friendly design of generative AI tools may render ease of use less critical in adoption decisions. This aligns with prior research suggesting that ease of use becomes less critical as technologies become more accessible and integrated into everyday platforms (Ahn, 2024). The results revealed that students predominantly used generative AI tools embedded in existing platforms, such as chatbots, which require minimal technical expertise or behavioural adjustment (V. Chan & Tang, 2024). These findings suggest that TAM may need to be adapted to account for the evolving nature of user interactions with highly intuitive technologies.
The lack of significant influence from perceived concerns and knowledge of AI raises questions about students’ awareness of the risks associated with generative AI. While students acknowledged limitations such as inaccuracies and biases, these concerns did not significantly deter their usage. This disconnect between students’ cognitive recognition of risks and behavioural responses may reflect a broader trend among younger users, who tend to prioritise the immediate benefits of technology over its potential drawbacks (Nikolenko & Astapenko, 2023). Additionally, the relatively low levels of perceived concerns suggest that students may view generative AI as a low-risk tool, particularly in contexts where financial and technical barriers to entry are minimal. These findings highlight the need for education policies that emphasise critical engagement with AI tools, equipping students to navigate the limitations of AI responsibly.
The integration of generative AI into academic programming has the potential to enhance the efficiency and personalisation of the teaching and learning experience (Li & Wong, 2024). However, as these tools become further embedded in daily life, there is a risk that students may develop overreliance on AI, potentially undermining the development of critical soft skills such as creativity, communication, and problem-solving. This concern is particularly relevant for future cohorts who grow up with generative AI integrated into their formative learning years (C. K. Y. Chan & Colloton, 2024). Conducting longitudinal studies with younger students who have prolonged exposure to AI tools could provide further insights into how early adoption shapes attitudes and behaviours over time.
Students in this study expressed mixed but generally optimistic views on the impact of generative AI on the future of jobs. This optimism may stem from the traditional belief that a university education serves as a safeguard against unemployment (Irish et al., 2025). However, the survey did not collect data on participants’ internship or work experience, which could influence their perspectives on employability. Future research should explore how practical exposure to the workforce shapes students’ attitudes toward AI’s impact on employment opportunities.
A comparison between students’ concerns about generative AI in higher education and its impact on the future of jobs revealed a positive relationship. Those who expressed concerns about AI limiting opportunities for peer interaction or the development of transferable skills were also more likely to predict that AI would replace jobs in the future. This finding underscores the importance of preparing students for a technology-driven workforce by fostering higher-order thinking skills such as critical analysis, socio-ethical reasoning, and interpersonal communication (Brown, 2023; Wut & Chan, 2025). Universities should evaluate employability strategies to ensure that students are equipped to leverage AI as a tool while mitigating its potential risks.
To maximise the benefits of generative AI in education, policymakers and educators should focus on providing clear guidance on how these tools can be effectively used for specific tasks, rather than relying on students to self-integrate and self-regulate (Li & Wong, 2017, 2019). The findings suggest that the potential risks of AI usage should be addressed. This includes introducing frameworks that emphasise transparency, safety, and the responsible use of AI tools (Salari et al., 2025). Such measures can help students navigate the limitations of AI while avoiding pitfalls like overreliance or misuse.
Finally, the dual role of generative AI as both a tool and a competitor underscores the importance of universities in preparing students for a future shaped by technological advancements. Ensuring that students acquire both the soft skills and technical expertise needed to thrive in a technology-driven workplace is essential for fostering long-term employability and adaptability (Jin et al., 2025).
The limitations of this study should also be noted when interpreting its findings. The study employed convenience sampling, which limits the generalisability of the results. While the sample provides insights into the attitudes of undergraduate students at a university in Hong Kong, the non-random nature of the sampling method may have introduced biases that limit the applicability of the results to broader populations. For instance, given that Hong Kong is a highly developed region, the sample may overrepresent students who are technologically inclined or who have greater access to generative AI tools. Additionally, cultural and institutional factors specific to the study context, such as the positive and supportive institutional culture for generative AI, may shape students’ attitudes and behaviours in ways that differ from those in other regions or educational systems. Future research should consider employing random sampling or stratified sampling methods to enhance the representativeness of the sample and improve the generalisability of findings across diverse populations and regions.
Moreover, the reported R2 values in this study were relatively low, with an overall value of 0.23, indicating that the proposed model explained only a small portion of the variance in the outcome variable (usage of generative AI tools). This limited explanatory power suggests that additional factors not captured by the model may play a significant role in influencing students’ adoption and usage of generative AI. Theoretical considerations may point to the need to expand the model to include constructs such as social influence, institutional support, and perceived risks, which have been shown to impact technology adoption in other contexts (Venkatesh et al., 2003). Measurement-related factors may also contribute to the relatively low R2 values, such as the use of self-reported data, which is subject to biases like social desirability or recall inaccuracies. Future studies could incorporate objective measures of AI usage, such as usage logs or behavioural tracking, to complement self-reported data and provide a more comprehensive understanding of the factors driving adoption.

6. Conclusions

This study examined undergraduate students’ attitudes toward generative AI tools and their perceptions of the future of jobs. By adapting TAM to include knowledge of AI, perceived concerns, and questions about the future of jobs, this research provides a more comprehensive understanding of student perspectives. Generative AI technologies hold immense potential to enhance efficiency, personalisation, and accessibility in education. However, understanding user perspectives is crucial for predicting adoption, integration, and usage patterns (Sumakul et al., 2022). Additionally, identifying student perceptions can help educators address potential risks. For instance, if students perceive AI outputs as mostly accurate, educators can implement programs to teach fact-checking and critical evaluation skills.
As technology advances, task automation is expected to rise (Bonney et al., 2024). Universities should prepare students for this shift by identifying and emphasising irreplaceable core skills, such as analytical thinking, creativity, resilience, and self-awareness. These efforts will ensure that graduates entering the Industry 4.0 workforce view generative AI as a tool rather than a competitor (Waring, 2024).
This study has several limitations that should be addressed in future research. First, the sample population was small, consisting of only 93 undergraduate students from a single institution, with limited representation across programs and age groups. This restricts the generalisability of the findings. Second, the reliance on self-reported survey data introduced the possibility of bias or inaccuracies. Third, the infancy of generative AI means there are no longitudinal data to compare these findings with or to measure changes in perception over time. This study also focused exclusively on students’ attitudes, excluding industry perspectives on whether recent graduates are adequately prepared for the AI-driven workforce.
Given the small sample size, the study’s reliance on descriptive statistics and regression models with limited explanatory power restricts its ability to draw definitive conclusions. Future research should employ more robust analytical methods, such as structural equation modelling or partial least squares, to better understand the relationships among variables and explore potential mediation or moderation effects. In-depth comparative analyses should also be conducted to investigate differences in usage patterns across demographic groups, such as age or academic majors.
Future research should address these limitations by expanding the sample population to include students from diverse programs and age groups, thereby enabling more robust analysis and yielding more nuanced findings. Qualitative methods, such as interviews or focus groups, should be employed to gain deeper insights into how the adoption of generative AI varies among different student populations. Longitudinal studies could also be conducted to track changes in student perceptions of generative AI over time. Additionally, bridging the gap between education and the workforce by surveying industry professionals could help determine whether universities are effectively preparing students for the challenges and opportunities of an AI-informed future.

Author Contributions

Conceptualization, K.C.L., G.H.L.C. and B.T.M.W.; Methodology, K.C.L., G.H.L.C. and B.T.M.W.; Software, K.C.L., G.H.L.C. and M.M.F.W.; Validation, K.C.L. and B.T.M.W.; Formal analysis, K.C.L. and G.H.L.C.; Investigation, K.C.L. and G.H.L.C.; Resources, K.C.L., G.H.L.C. and M.M.F.W.; Data curation, K.C.L. and G.H.L.C.; Writing—original draft, K.C.L. and G.H.L.C.; Writing—review & editing, K.C.L., G.H.L.C., B.T.M.W. and M.M.F.W.; Visualization, K.C.L. and G.H.L.C.; Supervision, K.C.L., B.T.M.W. and M.M.F.W.; Project administration, K.C.L., B.T.M.W. and M.M.F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Research Ethics Committee of Hong Kong Metropolitan University (Reference No. HE-SF2024/48; date of approval: 18 October 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Arbulu Perez Vargas, C. G., Orellana Ulloa, M. N., Gutiérrez Ulloa, C. R., Pizarro Romero, J. M., Gutiérrez Jaramillo, N. D., Cuenca Orellana, H. U., Ayala Anzoátegui, D. X., & López Roca, C. (2024). Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among Generation Z university students. International Journal for Educational Integrity, 20(1), 10.
  2. Ahn, H. Y. (2024). AI-powered e-learning for lifelong learners: Impact on performance and knowledge application. Sustainability, 16(20), 9066.
  3. Ali, M., Raza, S. A., Khamis, B., Puah, C. H., & Amin, H. (2021). How perceived risk, benefit and trust determine user fintech adoption: A new dimension for Islamic finance. Foresight, 23(4), 403–420.
  4. Arts, J. W. C., Frambach, R. T., & Bijmolt, T. H. A. (2011). Generalizations on consumer innovation adoption: A meta-analysis on drivers of intention and behavior. International Journal of Research in Marketing, 28(2), 134–144.
  5. Baek, C., Tate, T., & Warschauer, M. (2024). “ChatGPT seems too good to be true”: College students’ use and perceptions of generative AI. Computers and Education: Artificial Intelligence, 7, 100294.
  6. Balliester, T., & Elsheikhi, A. (2018, April). The future of work: A literature review. International Labour Organization. Available online: https://www.ilo.org/wcmsp5/groups/public/---dgreports/---inst/documents/publication/wcms_625866.pdf (accessed on 15 March 2025).
  7. Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57, 100745.
  8. Bewersdorff, A., Hornberger, M., Nerdel, C., & Schiff, D. S. (2025). AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students’ AI self-efficacy. Computers and Education: Artificial Intelligence, 8, 100340.
  9. Bonney, K., Breaux, C., Buffington, C., Dinlersoz, E., Foster, L., Goldschlag, N., Haltiwanger, J., Kroff, Z., & Savage, K. (2024). The impact of AI on the workforce: Tasks versus jobs? Economics Letters, 244, 111971.
  10. Brown, R. (2023). The AI generation: How universities can prepare students for the changing world. Demos. Available online: https://demos.co.uk/research/the-ai-generation-how-universities-can-prepare-students-for-the-changing-world/ (accessed on 15 March 2025).
  11. Cazzaniga, M., Tavares, M. M., Rockall, E. J., Pizzinelli, C., Panton, A. J., Giovanni, M., Li, L., & Jaumotte, F. (2024). Gen-AI: Artificial intelligence and the future of work. Staff Discussion Notes, 2024(001), 1.
  12. Chan, C. K. Y., & Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. Routledge.
  13. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
  14. Chan, C. K. Y., & Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Studies in Educational Evaluation, 83, 101395.
  15. Chan, V., & Tang, W. K. W. (2024). GPT for translation: A systematic literature review. SN Computer Science, 5(8), 986.
  16. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319.
  17. Dekker, F., Salomons, A., & van der Waal, J. (2017). Fear of robots at work: The role of economic self-interest. Socio-Economic Review, 15(3), 539–562.
  18. Elrawy, S., & Wagdy, B. (2025). Perceptions of generative AI in the architectural profession in Egypt: Opportunities, threats, concerns for the future, and steps to improve. AI and Society.
  19. European Commission. (2012). Public attitudes towards robots. Eurobarometer. Available online: https://europa.eu/eurobarometer/surveys/detail/1044 (accessed on 15 March 2025).
  20. Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280.
  21. Granić, A. (2023). Technology acceptance and adoption in education. In O. Zawacki-Richter, & I. Jung (Eds.), Handbook of open, distance and digital education. Springer.
  22. Gustafson, K. A., Rowe, C., Gavaza, P., Bernknopf, A., Nogid, A., Hoffman, A., Jones, E., Showman, L., Miller, V., Abdel Aziz, M. H., Brand-Eubanks, D., Do, D. P., Berman, S., Chu, A., Dave, V., Devraj, R., Hintze, T. D., Munir, F., Mohamed, I., … Southwood, R. (2025). Pharmacists’ perceptions of artificial intelligence: A national survey. Journal of the American Pharmacists Association, 65(1), 102306.
  23. Huedo-Martínez, S., Molina-Carmona, R., & Llorens-Largo, F. (2018). Study on the attitude of young people towards technology. In P. Zaphiris, & A. Ioannou (Eds.), Learning and collaboration technologies. Learning and teaching (Vol. 10925). Springer.
  24. Inamdar, S., & Kumar, S. V. S. (2025). Generative AI in education industry: Job displacement, job opportunities, and ethical considerations. In S. K. Oruganti, D. A. Karras, S. Thakur, J. K. Chaithanya, S. Metta, & A. Lathigara (Eds.), Digital transformation and sustainability of business (Chapter 163). CRC Press.
  25. Irish, A. L., Gazica, M. W., & Becerra, V. (2025). A qualitative descriptive analysis on generative artificial intelligence: Bridging the gap in pedagogy to prepare students for the workplace. Discover Education, 4, 48.
  26. Jeffrey, T. (2020). Understanding college student perceptions of artificial intelligence. Systemics, Cybernetics and Informatics, 18(2), 8–13.
  27. Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348.
  28. Li, K. C., & Wong, B. T. M. (2017). Ways to enhance metacognition through the factors of learning processes, achievement goals and self-efficacy. International Journal of Innovation and Learning, 21(4), 435–448.
  29. Li, K. C., & Wong, B. T. M. (2019). Enhancing learners’ metacognition for smart learning: Effects of deep and surface learning, disorganisation, achievement goals and self-efficacy. International Journal of Smart Technology and Learning, 1(3), 203–217.
  30. Li, K. C., & Wong, B. T. M. (2023). Artificial intelligence in personalised learning: A bibliometric analysis. Interactive Technology and Smart Education, 30(3), 422–445.
  31. Li, K. C., & Wong, B. T. M. (2024). Features and trends of personalised learning: A review of journal publications from 2001 to 2018. In S. Cheung, F. Wang, L. Kwok, & P. Poulová (Eds.), Personalized learning: Approaches, methods and practices (pp. 4–17). Routledge.
  32. Liu, Y., & Wang, H. (2024). Who on earth is using generative AI? (Policy Research Working Paper 10870). World Bank. Available online: http://hdl.handle.net/10986/42071 (accessed on 15 March 2025).
  33. Ma, D., Dong, J., & Lee, C.-C. (2024). Influence of perceived risk on consumers’ intention and behavior in cross-border e-commerce transactions: A case study of the Tmall Global platform. International Journal of Information Management, 81, 102854.
  34. Nikolenko, O., & Astapenko, E. (2023). The attitude of young people to the use of artificial intelligence. E3S Web of Conferences, 460, 05013.
  35. Noh, N. H., Raju, R., Eri, Z. D., & Ishak, S. N. (2021). Extending technology acceptance model (TAM) to measure the students’ acceptance of using digital tools during open and distance learning (ODL). IOP Conference Series: Materials Science and Engineering, 1176(1), 012037.
  36. OECD. (2019). Artificial intelligence in society. OECD Publishing.
  37. Rondan-Cataluña, F. J., Arenas-Gaitán, J., & Ramírez-Correa, P. E. (2015). A comparison of the different versions of popular technology acceptance models: A non-linear perspective. Kybernetes, 44(5), 788–805.
  38. Rožman, M., Tominc, P., & Vrečko, I. (2023). Building skills for the future of work: Students’ perspectives on emerging jobs in the data and AI cluster through artificial intelligence in education. Environment and Social Psychology, 8(2), 1–24.
  39. Salari, N., Beiromvand, M., Hosseinian-Far, A., Habibi, J., Babajani, F., & Mohammadi, M. (2025). Impacts of generative artificial intelligence on the future of labor market: A systematic review. Computers in Human Behavior Reports, 18, 100652.
  40. Slamet, J. (2024). Potential of ChatGPT as a digital language learning assistant: EFL teachers’ and students’ perceptions. Discover Artificial Intelligence, 4(1), 46.
  41. Sumakul, D. T., Hamied, F. A., & Sukyadi, D. (2022, September 9–11). Students’ perceptions of the use of AI in a writing class. Advances in social science, education and humanities research. 67th TEFLIN International Virtual Conference & the 9th ICOELT 2021 (TEFLIN ICOELT 2021), Padang, Indonesia.
  42. Tala, M. L., Muller, C. N., Nastase, I. A., State, O., & Gheorghe, G. (2024). Exploring university students’ perceptions of generative artificial intelligence in education. Amfiteatru Economic, 26(65), 71–84.
  43. Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing.
  44. Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315.
  45. Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: Development and test. Decision Sciences, 27(3), 451–481.
  46. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
  47. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
  48. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
  49. Vicsek, L., Bokor, T., & Pataki, G. (2022). Younger generations’ expectations regarding artificial intelligence in the job market: Mapping accounts about the future relationship of automation and work. Journal of Sociology, 60(1), 21–38.
  50. Vicsek, L., Pinter, R., & Bauer, Z. (2025). Shifting job expectations in the era of generative AI hype—Perspectives of journalists and copywriters. International Journal of Sociology and Social Policy, 45(1/2), 1–16.
  51. Waring, P. (2024). Artificial intelligence and graduate employability: What should we teach generation AI? Journal of Applied Learning & Teaching, 7(1), 22–25.
  52. Wut, T. M., & Chan, E. A.-H. (2025). Disaffordances or affordances: Perceptions of ChatGPT in the workplace. Sustainable Futures, 9, 100632.
  53. Yao, E., Guo, D., Liu, S., & Zhang, J. (2024). The role of technology belief, perceived risk and initial trust in users’ acceptance of urban air mobility: An empirical case in China. Multimodal Transportation, 3(4), 100169.
  54. Yousafzai, S. Y., Foxall, G. R., & Pallister, J. G. (2007). Technology acceptance: A meta-analysis of the TAM: Part 1. Journal of Modelling in Management, 2(3), 251–280.
  55. Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21.
  56. Zhang, J., & Zhang, Z. (2024). AI in teacher education: Unlocking new dimensions in teaching support, inclusive learning, and digital literacy. Journal of Computer Assisted Learning, 40(4), 1871–1885.
Figure 1. Framework summarising attitudes towards AI.
Table 1. Cronbach’s Alpha.

| Construct (N of Items) | Cronbach’s Alpha | Standardised Alpha |
|---|---|---|
| Perceived Ease of Use (3) | 0.79 * | 0.80 |
| Knowledge of AI (6) | 0.66 | 0.68 |
| Perceived Concerns (6) | 0.79 * | 0.80 |
| Perceived Benefits (6) | 0.81 * | 0.82 |

* Above the recommended threshold of 0.70.
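The reliability coefficients in Table 1 follow the standard Cronbach’s alpha formula, computed from the per-item variances and the variance of the summed scale. A minimal Python sketch with made-up response data (not the study’s survey responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: five respondents answering three 5-point Likert items
responses = np.array([
    [4, 4, 3],
    [5, 4, 4],
    [3, 3, 2],
    [4, 5, 3],
    [2, 3, 2],
])
print(round(cronbach_alpha(responses), 2))  # prints 0.9
```

The standardised alpha column instead applies the same formula to z-scored items, which is why the two columns in Table 1 differ slightly.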
Table 2. Demographic Information.

| Characteristic | n | % |
|---|---|---|
| Gender | | |
|   Female | 43 | 52% |
|   Male | 40 | 48% |
| Program | | |
|   Bachelor of Arts | 1 | 1% |
|   Chinese | 2 | 2% |
|   Translation | 22 | 26% |
|   Computing and Interactive Entertainment | 15 | 18% |
|   New Music and Interactive Entertainment | 29 | 35% |
|   Data Science and AI | 10 | 12% |
|   Computer Science | 4 | 5% |
| Level of schooling where generative AI tools were used | | |
|   Secondary | 4 | 5% |
|   University | 57 | 69% |
|   Both | 18 | 22% |
|   Neither | 4 | 5% |
| Frequency of generative AI tool usage | | |
|   Daily | 7 | 8.4% |
|   Weekly | 36 | 43.4% |
|   Monthly | 21 | 25.2% |
|   Quarterly or never | 16 | 22.2% |
Table 3. Perceived Ease of Use.

| Statement | Mean | SD |
|---|---|---|
| [E1] Generative AI tools are easy to use. | 3.91 | 0.94 |
| [E2] It is easy to learn how to use generative AI tools. | 3.87 | 0.96 |
| [E3] It is easy to use generative AI tools for studying/homework. | 3.29 | 1.00 |
Table 4. Knowledge of Generative AI.

| Statement | Mean | SD |
|---|---|---|
| [K4] Generative AI tools have limitations in handling complex tasks. | 4.08 | 0.77 |
| [K5] Generative AI tools may generate inaccurate outputs. | 4.39 | 0.66 |
| [K6] Generative AI tools may generate irrelevant outputs. | 3.94 | 0.85 |
| [K7] Generative AI tools may exhibit biases in outputs. | 3.34 | 0.93 |
| [K8] Generative AI tools may rely too heavily on statistics, limiting their usefulness in new contexts without historical data. | 3.80 | 0.84 |
| [K9] Generative AI tools have limited emotional intelligence and empathy, which may lead to insensitive or inappropriate outputs. | 3.28 | 1.06 |
Table 5. Perceived Benefits.

| Statement | Mean | SD |
|---|---|---|
| [B10] Generative AI tools are useful for my study. | 3.95 | 0.70 |
| [B11] Generative AI tools clarify many points that teachers cannot cover in their explanation. | 3.41 | 0.91 |
| [B12] Generative AI tools fulfil and complement students’ learning needs. | 3.80 | 0.74 |
| [B13] Generative AI tools provide students with additional learning support. | 4.10 | 0.67 |
| [B14] Learning through generative AI tools makes learning less stressful than traditional methods. | 3.76 | 0.92 |
| [B15] Generative AI tools change the way students acquire skills in certain subjects. | 3.77 | 0.89 |
Table 6. Perceived Concerns.

| Statement | Mean | SD |
|---|---|---|
| [C16] Using generative AI tools to complete assignments undermines the value of university education. | 2.80 | 1.13 |
| [C17] Using generative AI tools limits my opportunities to interact with others and socialise when completing coursework. | 2.41 | 1.02 |
| [C18] Using generative AI tools hinders my development of transferable skills (e.g., teamwork, problem-solving, and leadership). | 2.51 | 1.07 |
| [C19] I may become over-reliant on generative AI tools. | 2.51 | 1.09 |
| [C20] Teachers’ role will diminish when students use generative AI tools to learn. | 2.43 | 1.11 |
| [C21] Students’ use of generative AI tools affects their ability to communicate with teachers. | 2.41 | 1.10 |
Table 7. Perceptions on Future of Jobs.

| Statement | Mean | SD |
|---|---|---|
| [R22] Generative AI technology steals people’s jobs. | 3.10 | 1.11 |
| [R23] Generative AI technology will make it harder for me to find a job after graduation. | 3.25 | 1.03 |
Table 8. Regression coefficients for relationship between all subscales and frequency of use of generative AI tools.

| Model 1 | B (unstandardised) | Std. Error | β (standardised) | t | p | 95% CI |
|---|---|---|---|---|---|---|
| Constant | 4.73 | 1.06 | | 4.47 | <0.001 | [2.625, 6.839] |
| Ease of Use | −0.31 | 0.14 | −0.25 | −2.20 | 0.03 | [−0.596, −0.029] |
| Knowledge | 0.01 | 0.24 | 0.01 | 0.06 | 0.95 | [−0.460, 0.489] |
| Concerns | 0.26 | 0.14 | 0.19 | 1.88 | 0.06 | [−0.015, 0.528] |
| Benefits | −0.42 | 0.20 | −0.24 | −2.07 | 0.04 | [−0.830, −0.017] |

Dependent variable: Frequency of use of generative AI tools.
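The models reported in Tables 8–13 are ordinary least squares regressions. As a minimal sketch of how coefficients and R² like those in Table 8 are obtained, the following uses simulated subscale scores (not the study’s survey data; the generating coefficients merely echo the table for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 83  # same sample size as the study

# Hypothetical 1-5 subscale scores standing in for the real survey responses
ease, knowledge, concerns, benefits = rng.uniform(1, 5, size=(4, n))
frequency = 4.7 - 0.3 * ease + 0.26 * concerns - 0.42 * benefits + rng.normal(0, 1, n)

# Design matrix with an intercept column, then least-squares fit
X = np.column_stack([np.ones(n), ease, knowledge, concerns, benefits])
coef, *_ = np.linalg.lstsq(X, frequency, rcond=None)

# R^2 from residual and total sums of squares
fitted = X @ coef
ss_res = ((frequency - fitted) ** 2).sum()
ss_tot = ((frequency - frequency.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot
print(coef.round(2), round(r_squared, 2))
```

The first element of `coef` corresponds to the Constant row; the rest are the unstandardised B values for each predictor.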
Table 9. Regression coefficients for relationships between perceived ease of use, perceived benefits, and frequency of use.

| Model 1 | B (unstandardised) | Std. Error | β (standardised) | t | p | 95% CI |
|---|---|---|---|---|---|---|
| Constant | 5.59 | 0.69 | | 8.17 | <0.001 | [4.230, 6.956] |
| Ease of Use | −0.28 | 0.14 | −0.22 | −1.95 | 0.06 | [−0.556, 0.006] |
| Benefits | −0.50 | 0.20 | −0.29 | −2.54 | 0.01 | [−0.899, −0.110] |

Dependent variable: Frequency of use of generative AI tools.
Table 10. Regression coefficients for relationship between knowledge of AI and frequency of use.

| Model 1 | B (unstandardised) | Std. Error | β (standardised) | t | p | 95% CI |
|---|---|---|---|---|---|---|
| Constant | 3.70 | 0.93 | | 3.99 | <0.001 | [1.852, 5.532] |
| Knowledge | −0.29 | 0.25 | −0.12 | −1.12 | 0.27 | [−0.791, 0.221] |

Dependent variable: Frequency of use of generative AI tools.
Table 11. Regression analysis for relationship between perceived ease of use, perceived benefits, perceived concerns, and frequency of use.

| Model 1 | B (unstandardised) | Std. Error | β (standardised) | t | p | 95% CI |
|---|---|---|---|---|---|---|
| Constant | 4.77 | 0.80 | | 5.95 | <0.001 | [3.177, 6.369] |
| Ease of Use | −0.31 | 0.14 | −0.25 | −2.22 | 0.03 | [−0.591, −0.032] |
| Benefits | −0.42 | 0.20 | −0.24 | −2.10 | 0.04 | [−0.820, −0.023] |
| Concerns | 0.26 | 0.14 | 0.19 | 1.90 | 0.06 | [−0.014, 0.526] |

Dependent variable: Frequency of use of generative AI tools.
Table 12. Regression analysis for relationship between perceived concerns about generative AI in education and the belief that generative AI will steal jobs.

| Model 1 | B (unstandardised) | Std. Error | β (standardised) | t | p | 95% CI |
|---|---|---|---|---|---|---|
| Constant | 2.13 | 0.42 | | 5.13 | <0.001 | [1.301, 2.951] |
| Concern | 0.39 | 0.16 | 0.26 | 2.44 | 0.02 | [0.072, 0.706] |

Dependent variable: [R22] Generative AI technology steals people’s jobs.
Table 13. Regression analysis for relationship between perceived concerns about generative AI in education and the belief that generative AI will negatively impact individual employment prospects.

| Model 1 | B (unstandardised) | Std. Error | β (standardised) | t | p | 95% CI |
|---|---|---|---|---|---|---|
| Constant | 2.84 | 0.40 | | 7.16 | <0.001 | [2.051, 3.631] |
| Concern | 0.17 | 0.15 | 0.12 | 1.08 | 0.28 | [−0.138, 0.468] |

Dependent variable: [R23] Generative AI technology will make it harder for me to find a job after graduation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, K.C.; Chong, G.H.L.; Wong, B.T.M.; Wu, M.M.F. A TAM-Based Analysis of Hong Kong Undergraduate Students’ Attitudes Toward Generative AI in Higher Education and Employment. Educ. Sci. 2025, 15, 798. https://doi.org/10.3390/educsci15070798