1. Introduction
Artificial intelligence (AI) has developed at a revolutionary pace in recent years and is merging into our daily lives at an astonishing rate (
Kelly et al., 2023). In the education sector, as in other sectors, these technologies are increasingly being integrated, offering numerous potential benefits for teaching and learning. For example, personalised education and innovation in educational materials can increase efficiency (
Almogren et al., 2024). Since its introduction, ChatGPT has emerged as one of the most widely adopted learning tools among Generation Z university students, partly due to their high level of technological adaptability and the seamless integration of digital tools into their everyday lives (
Shahzad et al., 2024). However, the success of these technologies depends not only on their technical capabilities but also on students’ attitudes, motivation, and perceived usefulness (
Teo & Noyes, 2011).
Recently, a specific subset of AI, generative artificial intelligence, has gained prominence in educational settings. Generative AI (GenAI) refers to systems that can autonomously produce content, such as text, images, or code, based on training data and user prompts (
Koteczki et al., 2025). Tools such as ChatGPT, Copilot, or DALL·E have become increasingly accessible to students and are now among the most frequently used technologies in higher education (
Chan & Hu, 2023;
Almassaad et al., 2024). Their use ranges from brainstorming and summarisation to code generation and academic writing assistance. While these tools offer new learning opportunities, their widespread adoption raises questions about trust, ethics, academic integrity, and digital literacy—especially among Generation Z students, who are considered digital natives but not necessarily critical users (
Pitts et al., 2025).
The use of GenAI has increased dramatically and is generating growing value for companies. According to a global survey, 65% of respondents reported that their organisations regularly use GenAI, almost double the figure for the previous year. Expectations have risen in parallel with adoption: 75% believe the technology could bring significant or even revolutionary change to their industries in the coming years. Companies are also adopting AI on a global scale, with the largest growth seen in Asia and Greater China (
Singla et al., 2024). The global market size of the machine learning segment of the AI market is expected to grow steadily between 2024 and 2030, increasing by $424.1 billion (+534.87%); after seven consecutive years of growth, it is estimated to reach a new peak of $503.41 billion in 2030 (
Statista, 2024a). In 2024, the use of AI saw a remarkable rise among global organisations. The proportion of companies integrating AI into at least one business function jumped to 72%, up from 55% in the previous year. Even more striking is the rapid growth of GenAI, which has been adopted by 65% of organisations worldwide (
Statista, 2024b). This surge in corporate adoption of AI places increasing pressure on educational institutions to prepare students for AI-driven workplaces. As AI becomes embedded in various business functions, universities are expected to equip students not only with digital literacy but with AI-specific competencies. This alignment is particularly critical for Generation Z, whose transition to the labour market coincides with this technological shift. Consequently, higher education is increasingly viewed as a vital space for developing AI fluency, ethical awareness, and practical skills relevant to professional contexts (
Sergeeva et al., 2025;
Mogaji et al., 2024). The use of artificial intelligence in education has grown significantly in recent years. The global market for AI in education was valued at
$2.5 billion in 2022 and is expected to grow to
$6 billion by 2025 (
AIPRM, 2023). According to a survey conducted in the United States, 33% of adults believed that AI had a somewhat negative impact on the education sector, around 32% perceived a positive impact, 20% saw neither a positive nor a negative impact, and 15% were uncertain (
Statista, 2024c).
With the explosive growth of technology, it is therefore essential to understand the attitudes and willingness of young people, as their attitudes fundamentally influence the effective integration of innovative tools into education (
Habibi et al., 2023). The relevance of this topic in Hungary is demonstrated by the fact that the use of AI tools has become a topical issue in Hungarian higher education. Institutions are responding to this phenomenon with regulations and training, while students are creatively integrating tools such as ChatGPT into their daily learning practices (
Habibi et al., 2023). In Hungary, the integration of AI, especially generative AI, into higher education is in an early but rapidly evolving phase. Although universities have started to issue institutional-level guidelines on ethical AI use and academic integrity, there is no unified national regulation yet (
Dabis & Csáki, 2024). In light of a recent government initiative in Hungary, higher education institutions are now required to review and revise their internal regulations on the use of AI by 1 September 2025, which further highlights the urgency of understanding student perspectives in shaping these institutional policies (
Eurydice, 2025). Compared to countries with more established AI policies (e.g., UK, US), Hungary follows a more decentralised, experimental model that grants significant autonomy to each institution (
Wu et al., 2024;
Daskalaki et al., 2024). This status quo presents a unique research opportunity: to explore student attitudes during a transitional policy environment. By focussing on Generation Z university students in Hungary, this study captures perspectives in a formative stage before norms are fully established. These insights can inform the development of more context-sensitive institutional strategies that align regulation, ethical standards, and actual student needs and expectations.
The integration of AI into education has received increasing attention over the past decade, as it offers numerous opportunities to improve the effectiveness of teaching and learning. AI applications in education can be classified into three main paradigms: the AI-directed approach, in which the learner is a recipient; the AI-supported model, in which the learner is a collaborative partner; and the AI-augmented approach, which places the learner in an autonomous guiding role (
Mustafa et al., 2024). These paradigms contribute to educational practices in various ways, for example, by developing adaptive learning paths, using intelligent tutoring systems, and providing real-time feedback on student performance (
Boussouf et al., 2024). One of the greatest advantages of AI in education is its ability to provide a learning experience tailored to the individual needs of learners, which increases motivation and improves learning outcomes (
Chardonnens, 2025). It also creates opportunities for automating administrative tasks, allowing teachers to focus more on personal student support (
Chardonnens, 2025). At the same time, the use of AI raises a number of challenges, particularly in relation to data protection, ethical responsibility, and technological inequalities, which are key to ensuring responsible and sustainable use (
Basch et al., 2025).
Generation Z—young people born between 1997 and 2012—are digital natives who have grown up surrounded by technological devices. This generation is particularly open to new technologies, including the use of AI in education. Research has shown that members of Generation Z often find AI-based learning tools effective and useful, especially when they make the learning process more personalised and relevant (
Lee et al., 2025). However, concerns about the use of AI in education are also emerging among Generation Z, mainly in relation to data security, ethical dilemmas, and the dangers of technological dependence (
Basch et al., 2025;
Chardonnens, 2025). Educational institutions must take these considerations into account when developing policies and practices that promote the responsible, ethical and equitable use of AI. Mapping Generation Z students’ attitudes toward AI is essential for the successful and effective implementation of educational technology tools. Based on student feedback and experiences, educational strategies can be developed that not only meet the expectations of modern learners but are also capable of managing the risks and challenges associated with the use of AI (
Mustafa et al., 2024). Ultimately, the success of educational AI systems depends on how well they can adapt to students’ needs, learning styles, and attitudes toward technology. Recent studies confirm that learners’ perceptions significantly shape the adoption of AI-based educational tools. Students often appreciate AI’s ability to support writing, feedback, and flexible learning, particularly in voluntary and autonomous settings (
Chan & Hu, 2023;
Almassaad et al., 2024). However, psychological and contextual factors, such as perceived usefulness or emotional engagement, continue to influence acceptance patterns (
Pitts et al., 2025).
At the same time, multiple barriers and concerns have been identified that limit full acceptance. These include ethical dilemmas such as the risk of plagiarism and academic dishonesty, data privacy issues, limited trust in the accuracy of AI-generated content, and a growing fear that overreliance on AI could undermine critical thinking or creativity (
Pitts et al., 2025;
Vieriu & Petrea, 2025). In
Pitts et al.’s (
2025) survey of American college students, respondents expressed ambivalence: while many saw AI as a productivity tool, others worried that it might foster overreliance or widen inequality between students with differing levels of digital competence.
These mixed findings highlight the importance of further research to explore how individual-level factors, such as perceived usefulness, enjoyment, and social influence, shape AI acceptance. Moreover, generational aspects (e.g., digital nativeness in Generation Z) and cultural context (e.g., institutional norms in Hungary) may also affect how students relate to AI. These insights have led us to investigate not only the psychological, technological, and social drivers of AI acceptance but also the heterogeneity of student attitudes by identifying distinct user groups with varying levels of openness and behavioural intention. As Hungarian universities begin to develop ethical guidelines and institutional policies for the responsible use of generative AI tools, it is becoming increasingly important to include student perspectives in this process. The success of AI integration in higher education depends not only on regulation but also on how students perceive and accept these technologies. Despite international progress, little is known about how Hungarian students, particularly Generation Z, evaluate these tools in terms of trust, usefulness, or motivation. Therefore, examining their views is essential to ensure that emerging policies and pedagogical strategies reflect the needs, concerns, and expectations of actual users. Our study addresses this gap by combining the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT2), and by applying both PLS-SEM and cluster analysis to a sample of Hungarian Generation Z university students. Through this approach, our aim is to provide a nuanced understanding of what supports or hinders the adoption of AI tools in higher education and to provide implications for policy, tool design, and digital pedagogy.
This study investigates the acceptance of AI technologies among Generation Z university students in Hungary, applying the theoretical frameworks of the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). While previous research has explored AI adoption in higher education, the majority of these studies focus on students from large educational systems. In contrast, this study provides insight into a culturally and institutionally distinct context that remains under-represented in the international literature. Beyond analysing key acceptance factors, the study uniquely contributes by segmenting students into distinct attitudinal clusters, offering both a deeper understanding of heterogeneous user needs and practical guidance for personalised educational interventions in AI integration. This segmentation adds value to the literature by moving beyond aggregate analyses and uncovering meaningful subgroups with distinct motivational patterns, which can help universities develop more targeted AI integration strategies, such as differentiated training, communication, and support programmes. The combined use of cluster analysis and PLS-SEM allows, on the one hand, the identification of different groups of students based on their attitudes toward AI and, on the other, the simultaneous testing and validation of the relationships between the acceptance dimensions.
4. Results
This chapter presents the main results of the questionnaire survey and is divided into three parts. First, descriptive statistical analysis is used to review respondents’ perceptions of and attitudes toward AI, highlighting the mean, standard deviation, and variance of each dimension. Second, cluster analysis is used to identify groups of students that show different patterns of AI adoption. Third, the results of structural equation modelling (PLS-SEM) are presented, which reveal the relationships between the individual variables.
4.1. Descriptive Statistics
Table 4 presents attitudes and intentions to use AI based on descriptive statistical indicators. The mean, median, mode, standard deviation, and variance values in the table provide a more detailed picture of the attitudes towards different aspects of AI.
The results of the descriptive statistical analysis show that students generally have a positive attitude toward AI use, especially for Perceived Ease of Use (PEoU), where mean scores ranged from 3.84 to 3.87, and Perceived Usefulness (PU), where mean scores ranged from 3.71 to 3.84. This suggests that most respondents see AI as a practical and easy-to-use tool that can help them in their daily activities. Attitude (A) and Behavioural Intention (BI) also show favourable scores, suggesting that students are open to using AI, although there is slight variation in the extent to which they plan to use it. Social Influence (SI) received the lowest mean scores (2.66–3.23) of the dimensions tested, suggesting that students do not feel strong external pressure or expectation to use AI; their acceptance of AI appears to be driven more by personal factors than by the opinions of the people around them and society. Enjoyment (E) shows a moderately high score, with mean values between 3.54 and 3.66, suggesting that although the use of AI can be enjoyable, students view it as a functional tool rather than a fun technology.
The analysis of the variance and dispersion of the data shows that respondents’ opinions are more homogeneous on some dimensions, while on others there are larger differences. The Perceived Ease of Use (PEoU: SD = 0.96–0.97, variance = 0.93–0.94) and Perceived Usefulness (PU: SD = 0.99–1.01, variance = 0.98–1.02) scores show relatively low variance, indicating that most students have similar perceptions of the ease of use and usefulness of AI. In contrast, there is greater variance for the Social Influence (SI: SD = 1.20, variance = 1.43) and Behavioural Intention (BI: SD = 1.12–1.19, variance = 1.25–1.41) variables, suggesting significant differences between students in the extent to which they feel the influence of the social environment and the extent to which they plan to use AI. The variances of the Attitude (A: SD = 0.97–1.08, variance = 0.95–1.17) and Enjoyment (E: SD = 1.10–1.13, variance = 1.21–1.28) dimensions are also relatively high, indicating that students rate their overall attitudes and enjoyment of AI use differently. These results suggest that, while respondents’ perceptions of the usefulness and manageability of AI are relatively consistent, there are larger differences in individual attitudes, social influence, and enjoyment.
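For readers who wish to reproduce this kind of item-level summary, the indicators reported in Table 4 can be computed with standard tools. The sketch below uses pandas on invented five-point Likert responses; the item names mirror the paper’s constructs, but the data are purely illustrative:

```python
import pandas as pd

# Synthetic 5-point Likert responses for two illustrative items
# (item names echo the paper's constructs; the values are made up)
df = pd.DataFrame({
    "PEoU1": [4, 4, 3, 5, 4, 3, 4, 5],
    "SI1":   [2, 3, 1, 4, 2, 3, 2, 5],
})

summary = pd.DataFrame({
    "mean":     df.mean(),
    "median":   df.median(),
    "mode":     df.mode().iloc[0],   # first (smallest) mode per item
    "std":      df.std(),            # sample SD (ddof=1)
    "variance": df.var(),
})
print(summary.round(2))
```

The sample standard deviation (ddof = 1) matches what statistical packages such as SPSS report by default.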
4.2. Cluster Analysis
Following the descriptive statistical analysis, the results of the cluster analysis are presented in order to identify different patterns of AI adoption. The analysis identified four clusters with different levels of technological openness, adoption, and willingness to buy.
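The paper does not detail the clustering algorithm; a common choice for this type of attitudinal segmentation is k-means on standardised construct scores. The sketch below illustrates the general procedure on synthetic data (302 respondents, matching the combined cluster sizes reported below; the scores themselves are random and purely illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic construct scores (rows = students; columns stand in for
# UNT, PEoU, PU, A, SI, BI, E) on a 1-5 scale
X = rng.uniform(1, 5, size=(302, 7))

X_std = StandardScaler().fit_transform(X)   # standardise before clustering
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_std)

labels = km.labels_
print({c: int((labels == c).sum()) for c in range(4)})  # cluster sizes
```

In practice, the number of clusters would be chosen with diagnostics such as the elbow criterion or silhouette scores rather than fixed in advance.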
4.2.1. Cluster 1—Moderately Open Technology Users
The first cluster, named “Moderately Open Technology Users”, includes 72 respondents who are moderately open to new technologies but are not considered to be innovators. Members of this cluster tend to have a positive attitude towards AI but do not show outstanding enthusiasm and openness towards new technologies. They accept the use of AI but do not necessarily consider it indispensable in their daily lives or consider social influences to play a decisive role in their decisions.
Table 5 shows the minimum and maximum values, means, and standard deviations for the attitude statements of the first cluster.
The descriptive statistics of the first cluster show that the members of this group are moderately open to new technologies. Among the variables measuring general attitudes towards technology (UNT), the highest mean value is UNT4 (3.74), indicating that the members of the cluster tend to agree that new technologies can have a positive impact on their lives. On the other hand, UNT5 (2.65), which measures the willingness to buy new technology products, has the lowest mean value, indicating that they are less willing to spend on new technologies and do not necessarily feel the need for them in their daily lives. Variables measuring the ease of use of AI (PEoU1 and PEoU2) show medium-high scores, indicating that, in general, the members of the cluster tend to find AI easy to use. Perceptions of AI usefulness (PU1-PU3) are slightly positive but not outstanding (3.14–3.32), indicating that cluster members do not necessarily see AI as an indispensable tool but do see it as useful to some extent. The general attitude towards the use of AI (A1-A3) is mixed. Of the attitudinal variables, A1 (2.92) has the lowest value, indicating that the members of the cluster do not have a clear positive perception of AI use. The values of A2 and A3 are slightly higher, suggesting that some of the cluster perceive some advantage in using AI, but this is not clear to all. Social influence (SI) scores the lowest average of all dimensions (SI1: M = 2.07, SI2: M = 2.78), indicating that the members of the cluster feel little external pressure to adopt AI and are not particularly encouraged by their environment to use it. Variables measuring the willingness to use AI (BI1-BI3) also show relatively low values (M = 2.71–3.03), suggesting that the members of the cluster do not have a clear commitment to actively use AI, although they may consider using it in some cases. 
The enjoyment factor (E) scores are medium (3.03 and 2.99), suggesting that although they do not find AI use particularly fun, they do not consider it to be particularly boring or unpleasant.
4.2.2. Cluster 2—Positive AI Adopters
The second cluster was named “Positive AI Adopters” and consists of 112 participants, who can be considered more open to AI than the first cluster. Overall, they view the technology positively, although none of the dimension means exceeds 4, ranging from 3.17 to 3.88. Social influences do not play a significant role in their case, but this group is likely to become active AI users and shows a positive attitude towards AI applications.
Table 6 shows the descriptive statistics of the second cluster.
The statistical results of the second group show that the members of this group are generally open to new technologies and AI. Compared to the first cluster, their average scores are slightly higher in all dimensions. Among the variables measuring technological attitudes (UNT), UNT4 was the most agreed upon, with an average score of 4.00, suggesting that, overall, they believe that technological innovations have a positive impact on their lives. However, their willingness to purchase new technological products (UNT5) is quite low, with an average value of 2.66. In terms of the ease of use of AI (PEoU), we see moderately high values, suggesting that the members of this cluster consider AI to be an easy-to-use technology. Perception of AI usefulness (PU) also shows high values, indicating that the group clearly sees the benefits and effectiveness of the AI application. The general attitude toward AI use (A) is also positive, while social influence (SI) is slightly lower, suggesting that although the influence of the social environment is present, AI acceptance is primarily driven by internal motivation. The intention to use AI (BI) shows high values (3.65–3.93), which means that the members are likely to become active AI users in the future. The enjoyment factor (E) is also strong (3.66–3.86), indicating that the group considers AI use not only useful but also enjoyable.
4.2.3. Cluster 3—Sceptical and Cautious Users
The third cluster is the least accepting of AI, which is why they have been labelled “Sceptical and Cautious Users.” This group includes the 36 respondents who are the most moderate and least interested in new technologies. Participants in this group do not really see AI as a useful tool and are less likely to see its positive effects in their daily lives. Although they are willing to use the technology to some extent, their sceptical attitude and low commitment mean that they cannot be considered the most active users.
Table 7 shows the descriptive statistics of the third cluster.
In the third cluster, attitudes toward technology in general show the highest values among the dimensions measured, which may suggest that these participants are more dismissive of AI specifically than of technology in general. Among the variables measuring technological attitudes (UNT), UNT4 has the highest value, suggesting that members of this group recognise the benefits of technological development to some extent but are not willing to purchase new technological products (UNT5). Variables measuring the ease of use of AI (PEoU) show moderately low values, suggesting that group members do not necessarily find AI easy to use. The perception of usefulness is also low, suggesting that participants do not clearly see the benefits of AI applications. The general attitude towards the use of AI (A) likewise shows low values (M = 2.08–2.25). Mean values of social influence (SI) can also be considered low, ranging between 1.64 and 2.17 on a five-point scale. Of the dimensions examined, the lowest mean values are for the intention to use (BI), from which we can conclude that these 36 respondents do not intend to use AI-based applications and tools in the future; nor do they find them pleasant or entertaining.
4.2.4. Cluster 4—AI Enthusiasts and Innovators
Cluster 4 (AI Enthusiasts and Innovators) comprises a total of 82 individuals who have an overwhelmingly positive attitude toward AI technologies and new innovations. Members of this group demonstrate a high level of technological openness and consider AI not only useful but also a key tool in their daily lives. They not only support the acceptance and application of AI but also actively seek new technological opportunities, making them the most committed and enthusiastic AI users in the sample.
Table 8 shows the results of the fourth group in descending order of means. This cluster is the most open to new technologies and AI. Among the variables measuring technological attitudes (UNT1–UNT6), UNT4 (M = 4.39, SD = 0.750) and UNT6 (M = 4.18, SD = 0.848) show the highest values, suggesting that cluster members are extremely positive about technological developments and feel that AI can greatly contribute to their lives. Their willingness to buy new technological products is also the highest among the clusters, with a mean of 3.29. Variables measuring the ease of use of AI likewise show the highest values of all groups (M = 4.46 and 4.57), indicating that members find AI extremely simple and intuitive to use. The same holds for their perception of the usefulness of AI and their general attitudes, as they consider it an effective and necessary technology. The social influence values (SI1–SI2; M = 3.29 and 3.90) suggest that their environment also supports the use of AI, although their acceptance is primarily driven by internal motivation. The intention to use AI (BI1–BI3) is remarkably high (M = 4.65–4.73), clearly indicating that the members of the cluster are committed to actively using AI. The enjoyment factor (E1–E2) is also the highest among all clusters, indicating that cluster members find the use of AI particularly enjoyable.
Table 9 shows the four groups that emerged from the cluster analysis of Generation Z university students’ acceptance of AI. The groups show different attitudes and behavioural characteristics in relation to AI technologies. The first cluster is moderately open to technological innovations, with a slightly positive attitude toward AI, but cannot be considered active or committed users, and their willingness to buy is low. In contrast, the second cluster is highly open, clearly positive about the acceptance of AI, and, with a moderate willingness to buy, likely to become active AI users. The third and smallest group is explicitly sceptical of and resistant to AI technologies, with minimal willingness to buy and little perceived social pressure. Finally, the fourth cluster comprises the most innovative and enthusiastic AI users, who are extremely open, actively seek out new technologies, and perceive the strongest social support for using AI. Overall, there are significant differences between students in their willingness to accept AI, their openness to technology, and their sensitivity to social influences. Based on the results, the innovator and positive-adopter groups are likely to be active users of future AI technologies, while a separate strategy may be needed to convince the sceptical group.
Table 10 shows the outer loadings of each item within the measurement model across six constructs: Perceived Usefulness, Perceived Ease of Use, Attitude, Social Influence, Enjoyment, and Behavioural Intention. All item loadings exceed the commonly accepted threshold of 0.70, indicating good convergent validity. The strongest loadings appear within the constructs of Enjoyment and Perceived Usefulness, suggesting high internal consistency and reliability of these scales in capturing students’ perceptions toward AI adoption.
Table 11 shows the internal consistency (Cronbach’s alpha and composite reliability) and convergent validity (AVE, explained average variance) of the measurement model for each construct. The Cronbach’s alpha values indicate how reliably the individual items measure the given construct; a value above 0.7 indicates acceptable reliability. Composite reliability (rho_a and rho_c) indicates the internal consistency of the items within the constructs, where values above 0.7 are also considered acceptable. The AVE measures how much variance a given construct explains from the associated measurement items; the minimum acceptable AVE values are generally 0.5. According to the results, most constructs (A, BI, E, PEOU, PU) achieve excellent reliability and validity indices, while in the case of Social Influence (SI), the Cronbach alpha is lower, indicating that this construct may need further refinement in future research.
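Cronbach’s alpha and AVE follow standard formulas: alpha compares the summed item variances with the variance of the total score, and AVE is the mean of the squared standardised outer loadings. A minimal sketch (the function names and example loadings are our own, chosen to lie in the range reported for the measurement model):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) response matrix for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def ave(loadings) -> float:
    """Average variance extracted from standardised outer loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return float((loadings ** 2).mean())

# Illustrative loadings; real values come from the PLS outer model
print(round(ave([0.83, 0.90, 0.95]), 3))
```

Alpha rises with inter-item correlation: perfectly correlated items yield alpha = 1, which is why near-duplicate items can inflate the statistic.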
4.2.5. Evaluation
In this section, the results of the PLS-SEM used in the investigation are presented. The purpose of the model is to reveal the strength of the relationships between the constructs examined and the direct impact of the factors determining the acceptance of AI technology on the intention to behave.
Figure 4 shows the relationships between the latent variables and their strengths (path coefficients), as well as the factor loadings associated with each indicator. We also indicate the explanatory power of the model (R² value), which shows to what extent the factors examined explain the variation in students’ intention to accept AI.
Figure 5 presents the results of the structural model, in which we examined the relationships between the latent variables under investigation and their strengths in the context of AI acceptance. The left side of the figure shows the constructs (e.g., PU), whose related indicators (e.g., PU1, PU2, PU3) show high factor loadings in the range of 0.827 to 0.966. These indicators measure the given constructs well, so the measurement model is adequately valid and reliable.
The path coefficients shown in the central part of the model indicate the direct effect of each construct on BI. The strongest positive effect is exerted by Enjoyment (β = 0.605), indicating that enjoyment of using AI technology is a key factor in shaping university students’ intention to use it. This is followed by Perceived Usefulness (β = 0.167), which also significantly supports the acceptance of AI. The Attitude (β = 0.100), Perceived Ease of Use (β = 0.033), and Social Influence (β = 0.111) constructs show more moderate, although statistically significant, relationships with intention to use, suggesting that while these are also relevant factors, they are less decisive in the acceptance of AI.
The explanatory power of the behavioural intention construct (R² = 0.814) is exceptionally high, which means that the factors included in the model explain the formation of AI usage intentions very well. Based on these results, it is important to emphasise that highlighting the enjoyment value of use and demonstrating the practical benefits of AI play particularly important roles. Less influential factors, such as social influence or ease of use, may be worth exploring in more depth in future research to better understand how they can support the widespread acceptance of AI technologies among students.
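The R² statistic quoted here has the usual definition: the proportion of variance in the endogenous construct (behavioural intention) explained by the model’s predictions. A minimal sketch of the generic formula, with invented observed and predicted scores, not outputs of the actual PLS software:

```python
import numpy as np

def r_squared(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Share of the variance in y accounted for by predictions y_hat."""
    ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

y     = np.array([2.0, 3.5, 4.0, 4.5, 5.0])   # observed BI scores (invented)
y_hat = np.array([2.2, 3.3, 4.1, 4.4, 4.8])   # model predictions (invented)
print(round(r_squared(y, y_hat), 3))
```

On this reading, R² = 0.814 means that 81.4% of the variance in behavioural intention is accounted for by the five predictors.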
Table 12 shows the variance inflation factors (VIF) used to check multicollinearity among the constructs examined. VIF values indicate whether there is a strong correlation between the independent variables in the model, which could distort the regression results. The generally accepted threshold for VIF is 5; values below this indicate no serious multicollinearity problems between the constructs. All VIF values for the examined constructs (Perceived Usefulness, Perceived Ease of Use, Attitude, Social Influence, and Enjoyment) remained below this threshold (1.65–2.12), so it can be concluded that the model is reliable and there is no significant multicollinearity bias between the latent variables.
4.2.6. Correlation
Table 13 illustrates the correlations between the latent variables examined. There is a strong positive correlation between behavioural intention and enjoyment (0.877), suggesting that the enjoyment value of using AI tools is closely related to university students’ intention to actually use them. Perceived Usefulness (0.744) and Attitude (0.752) also show significant relationships with behavioural intention, confirming that these factors greatly influence the intention to use AI. Social influence shows a moderately strong relationship with behavioural intention (0.615), while perceived ease of use shows a weaker, moderate relationship (0.490). Comparing these results with the PLS model, we see that although the correlation matrix shows strong relationships between several factors and the intention to use, the structural model differentiated the strength of the effects more precisely. The high correlation of the enjoyment factor is consistent with its strong path coefficient in the PLS model (β = 0.605), which highlights the primary role of this construct. However, despite its moderate correlation, the social influence factor has only a weak direct effect in the PLS model (β = 0.111), indicating that although there is a general relationship, the direct influence of this construct on intention is limited.
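The pattern noted here, a sizeable zero-order correlation paired with a small direct path, is typical when predictors are themselves correlated. The toy simulation below (an invented data-generating process, not the study’s model, and ordinary least squares rather than PLS-SEM) reproduces the effect: SI correlates substantially with BI, yet its coefficient shrinks once E is controlled for:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
# Invented process: enjoyment (E) drives intention (BI) directly, while
# social influence (SI) correlates with E but has only a small direct effect
E  = rng.normal(size=n)
SI = 0.6 * E + 0.8 * rng.normal(size=n)
BI = 0.8 * E + 0.1 * SI + 0.3 * rng.normal(size=n)

corr_SI_BI = np.corrcoef(SI, BI)[0, 1]          # zero-order correlation
X = np.column_stack([np.ones(n), E, SI])
beta = np.linalg.lstsq(X, BI, rcond=None)[0]    # [intercept, E, SI] coefficients
print(round(corr_SI_BI, 2), round(beta[2], 2))
```

The SI–BI correlation comes out around 0.6, while the direct SI coefficient stays near the 0.1 built into the simulation, mirroring the gap between Table 13 and the path model.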
5. Discussion
Research focusing on the attitudes and behavioural intentions of Generation Z university students offers valuable insights for the more effective integration of AI tools in higher education. The relevance of this topic is underscored by the growing number of Hungarian universities that are developing institutional guidelines and training initiatives around AI, while students themselves are already creatively incorporating tools like ChatGPT into their everyday learning routines.
In this context, it is particularly important to explore the factors that encourage or discourage young people’s use of AI, as their attitudes and experiences fundamentally determine the educational integration of innovative tools. The results show that the acceptance of AI tools is primarily shaped by students’ own perceptions and experiences. Among the psychological factors, students’ general attitude towards AI stands out: those with a favourable, open attitude towards the technology also report a much higher intention to use it. This is consistent with the basic tenets of technology acceptance models (e.g., TAM, TPB), which hold that positive attitudes increase behavioural intention. Another psychological factor is the level of intrinsic motivation, or enjoyment: if students find using AI enjoyable, this reinforces their positive attitude and motivation towards the tool. In our research, we found that the experiential factor of AI use contributes significantly to willingness to use, indicating that for young users it is not only what the technology can do that matters, but also how it feels to use it.
The cluster analysis based on attitudes revealed four different student groups, indicating that Generation Z university students do not have a uniform attitude toward AI. At one extreme are the “AI enthusiasts and innovators,” who are extremely open and enthusiastic about new technologies. They are characterised by very positive attitudes and high usage intentions: they are committed to using AI tools in their studies in the future. For them, the use of AI is almost self-evident, a desirable innovation, and their technological openness is outstanding. At the other end of the spectrum is the group of “sceptical and cautious users,” with a low level of acceptance. They are mostly distrustful or uncertain about AI tools, and their attitude is rather negative or hesitant, manifesting itself in weak behavioural intentions; they do not plan to actively use these tools. Their technological openness is also limited: many of them would only use AI if they had to, or if its use was completely simple and risk-free. Two intermediate segments can also be identified between the two extreme groups. These four segments are clearly distinct in their behavioural intentions and degree of technological openness, and the results statistically confirm significant differences between the groups in this area. All this suggests that the student population is not homogeneous: there are different attitude groups that need to be addressed and supported in different ways during the integration of AI.
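Attitude-based segmentation of this kind is typically produced with k-means clustering on the respondents’ item scores. The sketch below is a minimal NumPy implementation, assuming k = 4 segments as in the study; the input matrix of attitude scores is hypothetical, and a production analysis would use a dedicated library.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign each respondent to the nearest centroid,
    then move each centroid to the mean of its assigned respondents."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialise centroids from k randomly chosen respondents
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every respondent to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # converged
            break
        centroids = new
    return labels, centroids
```

In practice the attitude items would be standardised (z-scores) before clustering, and the four-cluster solution validated, for example with silhouette scores, before the segments are profiled and named.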
The third question of our research focused on the role played by emotional (for example, experiential) and functional (for example, utility) factors in the intention to use technology (RQ3). Our results show that both factors are significant, but the influence of emotional motives is particularly strong. Specifically, structural modelling showed that “Perceived Enjoyment” is the strongest predictor of usage intention, even preceding the effect of usefulness. If students experience using an AI tool as an enjoyable experience and a motivating process, they are more likely to continue using it than if they simply know that the tool is useful. Of course, functional benefits remain important: students need to see concrete advantages (such as better grades, time savings, and more efficient learning) in order to commit to a technology in the long term. This finding is consistent with some of the results of recent international literature. For example, UTAUT2-based research in Russia examining the adoption of generative AI showed that hedonic motivation (the enjoyment factor) significantly increased students’ intention to use AI (Sergeeva et al., 2025). Similarly, in a survey of Chinese university students, those who found ChatGPT entertaining were much more likely to plan to use the system for language learning purposes in the future (Xu & Thien, 2025).
All five hypotheses were supported by the PLS-SEM analysis.
H1. Perceived Usefulness (PU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
The results confirm that perceived usefulness has a significant positive effect on the behavioural intention to use AI tools, supporting H1. This finding is consistent with previous studies that highlight that when students recognise the academic value and functional utility of AI, they are more likely to adopt it (Chan & Hu, 2023). In our model, usefulness emerged as one of the strongest predictors of intention, underscoring the importance of task-relevant benefits in the context of learning technologies.
H2. Perceived Ease of Use (PEOU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Although perceived ease of use did not have a strong direct effect on behavioural intention, it significantly influenced attitude, which, in turn, impacted intention. This mediating effect suggests that Generation Z students are more likely to adopt AI tools when they perceive them as intuitive and simple to use, an effect that operates primarily through the formation of positive attitudes. This supports earlier TAM-based research emphasising usability as a key antecedent of attitude (Venkatesh & Davis, 2000).
H3. Social Influence (SI) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Although social influence was found to be a statistically significant predictor of behavioural intention, its effect size was relatively weak (β = 0.111). This suggests that while peer norms, instructor attitudes, and the general academic environment may play a role in shaping students’ decisions to adopt AI tools, these factors are less influential than internal motivations such as enjoyment or perceived usefulness. This indicates that Generation Z students may rely more on personal evaluations than on external expectations when it comes to adopting emerging technologies like generative AI.
H4. Attitude (A) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Attitude toward using AI tools was found to have a significant positive effect on behavioural intention, confirming H4. This indicates that students who feel positively about AI are more inclined to use it in their academic routines. This finding reinforces the role of affective evaluations in technology acceptance models, particularly in voluntary learning environments, where internal motivation plays a crucial role.
H5. Enjoyment (E) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
H5 was strongly supported. Enjoyment, which represents hedonic motivation, had the highest standardised path coefficient in the model (β = 0.605), making it the most powerful predictor of behavioural intention. This finding underscores that emotional engagement and user experience are central to the adoption of AI tools among Generation Z, who are known to value interactivity, gamification, and personalisation in digital environments (Strzelecki, 2024).
Our results are consistent with findings in the international literature, particularly in emphasising the role of enjoyment and attitude. Xu and Thien (2025) studied Chinese university students and found that perceived enjoyment plays a mediating role in the formation of usage intention and that social influence also had a positive effect on willingness to use. This confirms that emotional factors and the social environment can have a significant influence on the acceptance of AI tools, especially in a language learning context. Similarly, Strzelecki (2024) found that hedonic motivation has a positive effect on the intention to use ChatGPT, while the effect of social influence proved to be negligible. This is in line with our findings, where enjoyment was also the strongest predictor, while the role of social influence appeared to be relatively weaker. Other studies also emphasise usefulness (PU), ease of use (PEOU), and their indirect impact on intention to use. Zou and Huang (2023) showed in their research among doctoral students that perceived usefulness and ease of use indirectly influence the use of ChatGPT through attitude, as we also found in our research. At the same time, Hoi’s (2020) study on the use of mobile technologies for language learning also highlights the importance of usability and the user experience. Based on this, the responses of Hungarian Generation Z students are well aligned with international trends, but they also emphasise immediate enjoyment as a driving force for acceptance, which may reflect cultural characteristics.
The findings highlighted enjoyment and attitude as the strongest predictors of behavioural intention, while perceived usefulness and social influence also played a significant role. In contrast, Habibi et al. (2023), using UTAUT2 with Indonesian university students, found that facilitating conditions were the most important driver of behavioural intention, while effort expectancy was not significant. Although perceived usefulness was included, it did not emerge as the strongest factor, suggesting contextual or infrastructural differences. Similarly, Barakat et al. (2025), who focused on university educators, found that anxiety and perceived risk had a considerable impact alongside perceived usefulness and social influence. These contrasts suggest that while usefulness and social influence are widely relevant, their impact varies with user role, institutional maturity, and local digital readiness.
In summary, the data analysis followed three main directions. First, the structural model (PLS-SEM) examined the effects of six theoretical constructs on the behavioural intention to use AI. Second, the cluster analysis segmented the students into four groups based on their openness to technology, level of adoption, and willingness to pay. Third, correlation analysis was used to explore the strength and direction of the relationships between the variables examined. Together, these complementary methods provided a comprehensive understanding of AI acceptance among Hungarian university students.
7. Conclusions
The aim of this research was to explore the factors influencing Generation Z university students’ attitudes toward AI technologies and, overall, how open and accepting they are toward them. In a questionnaire-based study, we analysed the responses of 302 students, measuring user attitudes and behavioural intentions across multiple dimensions, such as perceived usefulness, usability, experience, attitude, and social influence. Two main methods were used to analyse the data: cluster analysis and PLS-SEM. The cluster analysis identified four distinct student groups, with the ‘Sceptical and Cautious Users’ showing low acceptance and the ‘AI Enthusiasts and Innovators’ strongly committed to using the technology. This suggests that attitudes toward AI vary significantly between student segments, which justifies differentiated education policy and communication strategies. The results showed that among the six factors examined, enjoyment had the strongest and most significant effect on behavioural intention, confirming the importance of hedonic motivation in the voluntary use of educational technology. Perceived usefulness also had a positive and significant effect, indicating that students value practical benefits. Attitude and social influence contributed to acceptance to a lesser extent but were still statistically relevant. By contrast, perceived ease of use showed only a marginal impact, suggesting that Generation Z students, as digital natives, do not consider usability a key barrier to adoption. Behavioural intention, as the outcome variable, was shaped most by emotional and utility-based perceptions, reinforcing the importance of student-centred design in AI-based educational tools.
The significance of the research lies in its ability to simultaneously identify different student attitude groups and quantitatively explore the main factors that influence acceptance using a combined methodological approach. The study offers a new perspective on the integration of AI into education and can help universities develop segment-based intervention strategies. Future research should include longitudinal studies to explore how students’ attitudes toward AI technologies change over time and as the technology evolves. In addition, including additional psychological and sociological factors, such as trust, data security, and identity, could help provide a more complete understanding of the acceptance of technology. The inclusion of other higher education contexts and countries would also allow for the generalisation of results and the exploration of cultural differences.
Based on the results, the acceptance of AI is most influenced by enjoyment and positive attitudes, so developers and educational institutions should offer user-friendly, experience-based AI solutions that are not only useful but also motivating for students. Decision makers should support training related to AI use, which reinforces positive attitudes and reduces resistance arising from a lack of knowledge. It is important to take student feedback into account when integrating AI tools, especially with respect to interactivity and usability. Marketing and communication strategies should emphasise the experience-based benefits of technology. Finally, ensuring ethical and transparent operations is the key to maintaining long-term acceptance.
Based on the findings and limitations identified, several future research directions emerge. One of the most important is the implementation of longitudinal studies. It would be worthwhile to follow the same students over a longer period of time to find out how their attitudes and usage habits change in relation to AI tools, for example, over a semester or during their university years. Second, it would be worthwhile to expand the range of factors examined in the future. Our research focused on the most important components of TAM and UTAUT (usefulness, ease of use, attitude, social influence, enjoyment), but did not cover the full spectrum of psychological and ethical factors related to technology. Future studies could include variables such as trust in AI, cybersecurity/data privacy concerns, technological anxiety, or user competence. Finally, further research should also expand the methodology. Quantitative analysis could be combined with a qualitative approach, such as interviews or focus groups, to gain a deeper understanding of the motivations and concerns. A more detailed description of the clusters we identified would help us understand exactly why a “sceptical” group is distrustful: this could be due to bad experiences, lack of information, or ethical reservations.
Given the uncertainty among students, higher education institutions should develop clear and consistent guidelines on the use of AI tools, especially generative AI such as ChatGPT. These guidelines should include ethical frameworks, permitted uses, and rules for referencing AI-generated content. Additionally, policymakers should encourage the integration of AI into the curriculum, for example, through pilot programmes or credit-bearing courses. Teacher training must also respond to new technological challenges, ensuring that educators have the competencies needed to apply AI in education. In the long term, it is necessary to create an innovation-friendly higher education environment in which the responsible, regulated, and value-creating use of AI becomes the norm. Overall, the study highlights that the adoption of AI technologies is a complex, multifactorial process and that different types of students require different approaches. Addressing the practical, emotional, and social dimensions together is essential for AI tools to become a natural and valuable part of education.