Article

Exploring Generation Z’s Acceptance of Artificial Intelligence in Higher Education: A TAM and UTAUT-Based PLS-SEM and Cluster Analysis

by Réka Koteczki * and Boglárka Eisinger Balassa
Vehicle Industry Research Center, Széchenyi István University, 1. Egyetem tér, 9026 Győr, Hungary
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(8), 1044; https://doi.org/10.3390/educsci15081044
Submission received: 2 July 2025 / Revised: 7 August 2025 / Accepted: 13 August 2025 / Published: 14 August 2025

Abstract

In recent years, the rapid growth of artificial intelligence (AI) has significantly transformed higher education, particularly among Generation Z students who are more open to new technologies. Tools such as ChatGPT are increasingly being used for learning, yet empirical research on their acceptance, especially in Hungary, is limited. This study aims to explore the psychological, technological, and social factors that influence the acceptance of AI among Hungarian university students and to identify different user groups based on their attitudes. The methodological novelty lies in combining two approaches: partial least-squares structural equation modelling (PLS-SEM) and cluster analysis. The survey, based on the TAM and UTAUT models, involved 302 Hungarian students and examined six dimensions of AI acceptance: perceived usefulness, ease of use, attitude, social influence, enjoyment and behavioural intention. The PLS-SEM results show that enjoyment (β = 0.605) is the strongest predictor of the intention to use AI, followed by usefulness (β = 0.167). All other factors also had significant effects. Cluster analysis revealed four groups: AI sceptics, moderately open users, positive acceptors, and AI innovators. The findings highlight that the acceptance of AI is shaped not only by functionality but also by user experience. Educational institutions should, therefore, provide enjoyable and user-friendly AI tools and tailor support to students’ attitude profiles.


1. Introduction

Artificial intelligence (AI) has developed at a revolutionary pace in recent years and is merging into our daily lives at an astonishing rate (Kelly et al., 2023). In the education sector, as in other sectors, these technologies are increasingly being integrated, offering numerous potential benefits for teaching and learning. For example, personalised education and innovation in educational materials can increase efficiency (Almogren et al., 2024). Since its introduction, ChatGPT has emerged as one of the most widely adopted learning tools among Generation Z university students, partly due to their high level of technological adaptability and the seamless integration of digital tools into their everyday lives (Shahzad et al., 2024). However, the success of these technologies depends not only on their technical capabilities but also on students’ attitudes, motivation, and perceived usefulness (Teo & Noyes, 2011).
Recently, a specific subset of AI, generative artificial intelligence, has gained prominence in educational settings. Generative AI (GenAI) refers to systems that can autonomously produce content, such as text, images, or code, based on training data and user prompts (Koteczki et al., 2025). Tools such as ChatGPT, Copilot, or DALL·E have become increasingly accessible to students and are now among the most frequently used technologies in higher education (Chan & Hu, 2023; Almassaad et al., 2024). Their use ranges from brainstorming and summarization to code generation and academic writing assistance. While these tools offer new learning opportunities, their widespread adoption raises questions about trust, ethics, academic integrity, and digital literacy, especially among Generation Z students, who are considered digital natives but not necessarily critical users (Pitts et al., 2025).
There is a dramatic increase in the use of Gen AI, which is increasingly generating value for companies. According to a recent global survey, 65% of the respondents reported that their organisations regularly use Gen AI, which is almost double the figure for the previous year. Parallel to the spread of its application, expectations have also risen, with 75% believing that the technology could bring significant or revolutionary change to their industries in the coming years. Companies are also using AI on a global scale, with the largest growth seen in Asia and Greater China (Singla et al., 2024). The global market size of the machine learning segment of the AI market is expected to grow steadily between 2024 and 2030, by a total of $424.1 billion (+534.87%). After seven consecutive years of growth, the market size is estimated to reach $503.41 billion in 2030, a new peak (Statista, 2024a). In 2024, the use of AI has seen a remarkable rise among global organisations. The proportion of companies that integrate AI into at least one business function has increased dramatically to 72%, a significant jump from 55% in the previous year. Even more striking is the exponential growth of Gen AI, which has been adopted by 65% of organisations worldwide (Statista, 2024b). This surge in corporate adoption of AI places increasing pressure on educational institutions to prepare students for AI-driven workplaces. As AI becomes embedded in various business functions, universities are expected to equip students not only with digital literacy but also with AI-specific competencies. This alignment is particularly critical for Generation Z, whose transition to the labour market coincides with this technological shift. Consequently, higher education is increasingly viewed as a vital space for developing AI fluency, ethical awareness, and practical skills relevant to professional contexts (Sergeeva et al., 2025; Mogaji et al., 2024). The use of artificial intelligence in education has grown significantly in recent years. The global AI in education market was valued at $2.5 billion in 2022 and is expected to grow to $6 billion by 2025 (AIPRM, 2023). According to a survey conducted in the United States, 33% of adults believed that AI had a somewhat negative impact on the education sector. Around 32% perceived a positive impact, 20% said they saw neither a positive nor a negative impact, and 15% were uncertain (Statista, 2024c).
With the explosive growth of technology, it is therefore essential to understand the attitudes and willingness of young people, as their attitudes fundamentally influence the effective integration of innovative tools into education (Habibi et al., 2023). The relevance of this topic in Hungary is demonstrated by the fact that the use of AI tools has become a topical issue in Hungarian higher education. Institutions are responding to this phenomenon with regulations and training, while students are creatively integrating tools such as ChatGPT into their daily learning practices (Habibi et al., 2023). In Hungary, the integration of AI, especially generative AI, into higher education is in an early but rapidly evolving phase. Although universities have started to issue institutional-level guidelines on ethical AI use and academic integrity, there is no unified national regulation yet (Dabis & Csáki, 2024). In light of a recent government initiative in Hungary, higher education institutions are now required to review and revise their internal regulations on the use of AI by 1 September 2025, which further highlights the urgency of understanding student perspectives in shaping these institutional policies (Eurydice, 2025). Compared to countries with more established AI policies (e.g., UK, US), Hungary follows a more decentralised, experimental model that grants significant autonomy to each institution (Wu et al., 2024; Daskalaki et al., 2024). This status quo presents a unique research opportunity: to explore student attitudes during a transitional policy environment. By focussing on Generation Z university students in Hungary, this study captures perspectives in a formative stage before norms are fully established. These insights can inform the development of more context-sensitive institutional strategies that align regulation, ethical standards, and actual student needs and expectations.
The integration of AI into education has received increasing attention over the past decade, as it offers numerous opportunities to improve the effectiveness of teaching and learning. AI applications in education can be classified into three main paradigms: the AI-directed approach, in which the learner is a recipient; the AI-supported model, in which the learner is a collaborative partner; and the AI-augmented approach, which places the learner in an autonomous guiding role (Mustafa et al., 2024). These paradigms contribute to educational practices in various ways, for example, by developing adaptive learning paths, using intelligent tutoring systems, and providing real-time feedback on student performance (Boussouf et al., 2024). One of the greatest advantages of AI in education is its ability to provide a learning experience tailored to the individual needs of learners, which increases motivation and improves learning outcomes (Chardonnens, 2025). It also creates opportunities for automating administrative tasks, allowing teachers to focus more on personal student support (Chardonnens, 2025). At the same time, the use of AI raises a number of challenges, particularly in relation to data protection, ethical responsibility, and technological inequalities, which are key to ensuring responsible and sustainable use (Basch et al., 2025).
Generation Z—young people born between 1997 and 2012—are digital natives who have grown up surrounded by technological devices. This generation is particularly open to new technologies, including the use of AI in education. Research has shown that members of Generation Z often find AI-based learning tools effective and useful, especially when they make the learning process more personalised and relevant (Lee et al., 2025). However, concerns about the use of AI in education are also emerging among Generation Z, mainly in relation to data security, ethical dilemmas, and the dangers of technological dependence (Basch et al., 2025; Chardonnens, 2025). Educational institutions must take these considerations into account when developing policies and practices that promote the responsible, ethical and equitable use of AI. Mapping Generation Z students’ attitudes toward AI is essential for the successful and effective implementation of educational technology tools. Based on student feedback and experiences, educational strategies can be developed that not only meet the expectations of modern learners but are also capable of managing the risks and challenges associated with the use of AI (Mustafa et al., 2024). Ultimately, the success of educational AI systems depends on how well they can adapt to students’ needs, learning styles, and attitudes toward technology. Recent studies confirm that learners’ perceptions significantly shape the adoption of AI-based educational tools. Students often appreciate AI’s ability to support writing, feedback, and flexible learning, particularly in voluntary and autonomous settings (Chan & Hu, 2023; Almassaad et al., 2024). However, psychological and contextual factors, such as perceived usefulness or emotional engagement, continue to influence acceptance patterns (Pitts et al., 2025).
At the same time, multiple barriers and concerns have been identified that limit full acceptance. These include ethical dilemmas, such as the risk of plagiarism and academic dishonesty, data privacy issues, limited trust in the accuracy of AI-generated content, and a growing fear that overreliance on AI could undermine critical thinking or creativity (Pitts et al., 2025; Vieriu & Petrea, 2025). In Pitts et al.’s (2025) survey of American college students, respondents expressed ambivalence: while many saw AI as a productivity tool, others worried that it might undermine their own learning or widen the inequality between students with differing levels of digital competence.
These mixed findings highlight the importance of further research to explore how individual-level factors, such as perceived usefulness, enjoyment, and social influence, shape AI acceptance. Moreover, generational aspects (e.g., digital nativeness in Generation Z) and cultural context (e.g., institutional norms in Hungary) may also affect how students relate to AI. These insights have led us to investigate not only the psychological, technological, and social drivers of AI acceptance but also the heterogeneity of student attitudes by identifying distinct user groups with varying levels of openness and behavioural intention. As Hungarian universities begin to develop ethical guidelines and institutional policies for the responsible use of generative AI tools, it is becoming increasingly important to include student perspectives in this process. The success of AI integration in higher education depends not only on regulation but also on how students perceive and accept these technologies. Despite international progress, little is known about how Hungarian students, particularly Generation Z, evaluate these tools in terms of trust, usefulness, or motivation. Therefore, examining their views is essential to ensure that emerging policies and pedagogical strategies reflect the needs, concerns, and expectations of actual users. Our study addresses this gap by combining the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT2), and by applying both PLS-SEM and cluster analysis to a sample of Hungarian Generation Z university students. Through this approach, our aim is to provide a nuanced understanding of what supports or hinders the adoption of AI tools in higher education and to provide implications for policy, tool design, and digital pedagogy.
This study investigates the acceptance of AI technologies among Generation Z university students in Hungary, applying the theoretical frameworks of the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). While previous research has explored AI adoption in higher education, the majority of these studies focus on students from large educational systems. In contrast, this study provides insight into a culturally and institutionally distinct context that remains under-represented in the international literature. Beyond analysing key acceptance factors, the study uniquely contributes by segmenting students into distinct attitudinal clusters, offering not only a deeper understanding of heterogeneous user needs but also practical guidance for personalised educational interventions in AI integration. This segmentation provides added value to the literature by moving beyond aggregate analyses and uncovering meaningful subgroups with distinct motivational patterns, which can help universities develop more targeted AI integration strategies, such as differentiated training, communication, and support programmes. The combined use of cluster analysis and PLS-SEM allows, on the one hand, the identification of different groups of students based on their attitudes toward AI and, on the other hand, the simultaneous testing and validation of the relationships between acceptance dimensions.

2. Theoretical Background

2.1. Technology Acceptance Models (TAM and UTAUT)

The study of technology acceptance has long been a focus of attention at the intersection of computer science and social sciences. The successful spread of new technologies has a significant impact not only on individuals but also on organisations and society as a whole (Masadeh & El-Haggar, 2024). Several models have been developed to understand the willingness to use technology. One of the best known and most widely used theoretical frameworks is the Technology Acceptance Model (TAM) developed by Davis (1989). The model is based on two key factors: perceived usefulness (PU) and perceived ease of use (PEOU), which together influence the behavioural intention (BI) to use the technology. Figure 1 shows the framework of the TAM model.
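For readers less familiar with the model, the TAM relationships can be summarised as a pair of linear structural equations. The sketch below is a schematic rendering with standardised constructs and generic coefficients, not the exact specification estimated later in this study:

\begin{aligned}
\mathrm{A} &= \beta_{1}\,\mathrm{PU} + \beta_{2}\,\mathrm{PEOU} + \varepsilon_{\mathrm{A}},\\
\mathrm{BI} &= \beta_{3}\,\mathrm{A} + \beta_{4}\,\mathrm{PU} + \varepsilon_{\mathrm{BI}},
\end{aligned}

where A denotes attitude toward use, BI denotes behavioural intention, the β coefficients express the strength of each effect, and PEOU is additionally assumed to influence PU.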
In research examining the adoption of AI technologies, the TAM is often used in an expanded form, taking into account additional contextual factors. Masadeh and El-Haggar (2024) applied a modified TAM to examine the adoption of IoT technologies in higher education in Saudi Arabia. In their study, they identified six external factors (knowledge sharing, mobility, interactivity, innovation, training, and virtual reality) that indirectly influenced the PU and PEOU constructs, thereby increasing students’ willingness to adopt IoT technologies. Applying the TAM, Wang et al. (2023) examined the adoption of AI technologies in e-Commerce and found that subjective norms (the opinions of other individuals or groups) positively influence perceived usefulness and ease of use, while trust has a direct effect on PEOU only. Their findings confirmed that PU and PEOU directly or indirectly influence attitudes and behaviour intentions, which ultimately determine the actual use of technology.
The examination of the acceptance of generative artificial intelligence is also often based on the TAM. Gupta and Yang (2024) analysed the acceptance of ChatGPT technology in a study of start-up companies and combined the TAM with other acceptance models, such as the AIDUA and T-AIA models. The authors concluded that information quality, system usability, and service quality significantly influence acceptance attitudes and intentions in a business environment. Based on their PLS-SEM modelling, variables PU and PEOU continue to play a central role in the adoption of ChatGPT technology (Gupta & Yang, 2024). Further research has examined the applicability of the TAM in various industries. Solomovich and Abraham (2024), for example, analysed the behaviour-shaping effect of ChatGPT in tourism using a TAM-based framework, while Manresa et al. (2024) examined the adoption of generative AI in the workplace in relation to employee commitment. Both studies confirmed that perceived usefulness and perceived ease of use remain key factors in adoption models.
The Unified Theory of Acceptance and Use of Technology (UTAUT) model plays a prominent role in technology acceptance research, especially when it comes to the introduction of new digital solutions, such as AI systems, and predicting their acceptance. The UTAUT model was developed by Venkatesh et al. (2003) with the aim of combining and unifying the main constructs of previously separate technology acceptance models (e.g., TAM, TRA, TPB) into a single framework. The four key constructs of UTAUT (performance expectancy, effort expectancy, social influence, and facilitating conditions) can directly predict behavioural intention and actual use of technology, while gender, age, experience, and voluntariness appear as moderating factors (Venkatesh et al., 2003). The significance of the model is demonstrated by the fact that the original study explained about 69% of the variance in users’ behavioural intention across the different technologies examined. The model has since been widely used, particularly in empirical research examining user intentions and behaviour in various contexts, ranging from education systems to public services to business and e-commerce applications (Williams et al., 2015).
Recent research has used the UTAUT model to examine the adoption of AI technologies such as ChatGPT. Menon and Shilpa (2023) demonstrated in their qualitative interview-based research that the acceptance of AI systems was dominated by the constructs of performance expectations and effort expectations, while social influence had a particularly significant impact on younger age groups. The existence of a technical infrastructure, that is, the supporting environment, also emerged as a determining factor in shaping user intentions. Furthermore, new context-specific factors are emerging in the application of UTAUT models. For example, privacy and the degree of interactivity have proven to be particularly important influencing variables in the case of generative AI tools (such as ChatGPT), although their impact varies: privacy concerns, for example, did not significantly influence usage intentions (Menon & Shilpa, 2023). Figure 2 shows the framework of the UTAUT model.
The UTAUT2 model, developed by Venkatesh et al. (2012), adapted the original framework to the consumer context by adding three additional variables: hedonic motivation, price value, and habit. These extended dimensions are particularly relevant when examining digital services such as generative artificial intelligence tools, where usage is often voluntary and strongly influenced by subjective experiences (Venkatesh et al., 2012). In the educational environment, the UTAUT2 model also provided an effective framework to predict students’ intentions to use technology. Sergeeva et al. (2025) confirmed in a quantitative study that performance expectations and hedonic motivation significantly influence students’ behavioural intentions when using AI-based educational technologies. The results of the research suggest that generative AI tools are actively used when they not only appear useful and effective but also provide an enjoyable experience for learners (Sergeeva et al., 2025). Figure 3 shows the framework of the UTAUT2 model.
In general, it can be said that the TAM remains a relevant and reliable tool for examining the adoption of AI technologies, especially when supplemented with additional variables and contextual factors. The continued central role of PU and PEOU can also be confirmed in the case of newer technologies such as generative AI (Mogaji et al., 2024), even though some research already calls for the use of new hybrid models to better understand the complex technological environment. UTAUT and its extended versions represent one of the most well-established and widely used theoretical frameworks in technology acceptance research, especially in dynamically evolving fields such as the educational application of Gen AI.

2.2. The Relationship Between the Dimensions Examined and the Behavioural Intention to Use AI Technology

2.2.1. The Role of Perceived Usefulness (PU) in AI Technology Acceptance

Recent studies consistently highlight perceived usefulness as a key determinant for Generation Z students to adopt generative AI tools in higher education. In technology acceptance models, perceived usefulness (PU) is the extent to which a user believes that using an AI service improves their performance, and this dimension has emerged as one of the strongest predictors of behavioural intent (Strzelecki, 2024; Lo et al., 2024). Students are more inclined to use ChatGPT when they perceive it as beneficial to accomplish academic tasks (e.g., improving writing, research efficiency, or problem-solving). For example, Masadeh & El-Haggar (2024) found that university students’ favourable perceptions of the usefulness significantly improved their attitude towards the tool, which in turn boosted their intention to use it for learning. Similarly, large-scale surveys have shown that AI performance expectancy (analogous to PU) has a direct positive impact on student BI (β ≈ 0.26) in higher education contexts (Strzelecki, 2024). These findings align with prior research on technology acceptance and confirm that when students believe that an AI tool will help them learn more effectively or efficiently, they are more likely to intend to adopt it (Aljohani, 2024; Masadeh & El-Haggar, 2024). The strong influence of PU on AI adoption is evident in various educational settings. Students value generative AI that demonstrably improves their productivity or learning outcomes, and this perceived benefit strongly motivates them to embrace ChatGPT as an integral part of their educational toolkit. In general, PU not only directly shapes the willingness to use AI but also indirectly fosters a positive attitude toward the technology that can further strengthen usage intentions (Masadeh & El-Haggar, 2024). In summary, the recent literature converges on the view that the more useful students perceive an AI application for their studies, the more likely they are to accept and intend to use it (Al-Abdullatif & Alsubaie, 2024).
H1. 
Perceived Usefulness (PU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.

2.2.2. The Significance of Perceived Ease of Use (PEOU) in Shaping Behavioural Intention

Perceived Ease of Use (PEOU) expresses how effortless users find a system to use. If a technological device is easy to use, users find it more useful, develop a more positive attitude toward it, and are more likely to use it. Numerous empirical studies confirm that PEOU has a significant positive effect on both users’ perceived usefulness and their attitudes toward technology, which ultimately increases their intention to use it (Zou & Huang, 2023). For example, Granić and Marangunić (2019) also pointed out in their comprehensive review of the literature that there is a strong direct relationship between ease of use and behavioural intention, as people are more likely to use a tool if it does not require much effort. Recent studies that examine the adoption of generative AI technologies in higher education confirm the prominent role of PEOU. According to a study conducted among Chinese university students, perceived ease of use and perceived usefulness were significant predictors of students’ attitudes toward ChatGPT (Almogren et al., 2024). Similarly, a multi-country study showed that PEOU strongly increases the intention to use ChatGPT, in some cases even exerting a stronger effect on willingness to use than perceived usefulness (Balaskas et al., 2025). This suggests that user-friendly design is particularly important for generative AI tools such as ChatGPT. If students perceive the system as easy to use and easy to learn, they are much more likely to use it in their learning processes.
PEOU is a key factor in shaping behavioural intentions in generative AI applications. If users, whether students or teachers, find that an AI tool (such as ChatGPT) is easy and smooth to use, this has a positive effect on their perception of the tool and increases their intention to use it (Shahzad et al., 2024). In light of the latest empirical findings, higher education institutions should strive to introduce user-friendly AI systems with a minimal learning curve. This will make it easier for students to integrate these tools into their studies, contributing to greater acceptance and effective use of AI technologies (Balaskas et al., 2025).
H2. 
Perceived ease of use (PEOU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.

2.2.3. The Influence of Social Factors (SI) on Technology Acceptance

Social influence (SI) in models dealing with the acceptance of information systems (e.g., the original TRA/TPB and the TAM2 and UTAUT models based on them) expresses how the opinions and behaviour of others influence an individual’s decision to use a new technology. This includes all the effects that the user’s environment, such as peers, teachers, colleagues, or social norms, has on their attitude toward and intention to use the technology (Dahri et al., 2024). Rogers’ theory of innovation diffusion classically points out that the spread of new ideas and tools is greatly aided by social support, as people tend to imitate the behaviour of groups that are important to them (Bakkabulindi, 2014). In an educational context, this means that if students see that their peers or teachers recommend and consider the use of an AI tool to be ethical, they may view the tool in a more positive light and become more receptive to try it out. In contrast, if the immediate environment is dismissive or critical, this can discourage commitment to technology. Social influence is therefore a form of external pressure or inspiration that shapes individuals’ perceptions and norms regarding technology (Dahri et al., 2024).
A series of empirical studies have examined the impact of social influence on the adoption of technology, and numerous findings confirm its importance in the field of AI tools in higher education. A recent international study that analyses university teachers’ attitudes toward ChatGPT found that support from colleagues and the professional community significantly increased teachers’ intention to use AI technology in teaching (Barakat et al., 2025). A similar pattern can be observed among the student population: A survey conducted in a smart education environment showed that subjective norms, that is, the social expectations perceived by students, positively predict behavioural intentions to use ChatGPT for learning purposes (Al Giffari et al., 2023). However, social influence is not always decisive; some recent studies paint a more nuanced picture. A large-scale survey of students at several universities found that the influence of subjective norms on the intended use of generative AI tools (e.g., ChatGPT) was weak: students primarily base their decisions on their own perceptions of usefulness and attitudes, rather than solely on social pressure (Ursavaş et al., 2025). This finding challenges the traditional assumption that higher education students rely heavily on the opinions of their peers or teachers when it comes to AI acceptance. Instead, it seems that young adult users focus on their personal experiences and the individual benefits offered by the tool, with external expectations playing only a secondary role. In summary, although social influence remains a relevant factor, it is essential for successful adoption that the technology itself offers substantive value to users.
H3. 
Social influence (SI) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Gen Z university students.

2.2.4. Attitude (A) as a Determinant of Behavioural Intention Toward AI Use

Attitude refers to a general positive or negative disposition toward the use of technology, that is, the mental and emotional evaluation with which the user approaches a given tool (Dahri et al., 2024). According to the TAM, attitude plays a central role. Davis (1989) found that user attitude is one of the most important predictors of behavioural intention. This is consistent with the underlying social-psychological theories. According to Fishbein and Ajzen’s theory of reasoned action and Ajzen’s theory of planned behaviour, attitude determines the extent to which we are willing to engage in a behaviour. If someone has a favourable attitude towards a technology, they are much more likely to use it (Zou & Huang, 2023). Studies on the acceptance of AI technologies in higher education consistently indicate that attitude is one of the strongest explanatory variables for intention to use. For example, a large-scale survey (943 respondents) in Turkey found that students’ attitudes toward the use of GenAI tools significantly predicted their behavioural intentions, even more strongly than many other factors. According to the same study, although the influence of ‘external’ variables (such as social norms) proved to be weak, personal attitudes carried a significant weight: those who believed more in the value of generative AI and had more positive feelings toward it showed a higher intention to use the tools (Ursavaş et al., 2025). This picture is further reinforced by another empirical finding: In a study conducted with Chinese university students, the positive attitudes and strong intentions of the students were closely related to their actual use of ChatGPT, that is, those who evaluated ChatGPT favourably were more likely to integrate it into their learning processes (Ge, 2024). This observation suggests that attitude is not merely an internal disposition but has practical consequences: users with a favourable attitude are more likely to become active users of the technology, while a sceptical or uncertain attitude may discourage use even if the technology is available. Numerous studies in the field of educational technology have reached similar conclusions: attitude is the ‘missing link’ between the objective characteristics of technology and the willingness to use it, linking perceptions and behaviour (Or, 2024). Attitude is a determining factor in shaping behavioural intentions in the use of AI. It also plays a critical role because it mediates the impact of other key variables: According to TAM, for example, perceived usefulness and ease of use first influence attitude, which in turn influences intention (Alfadda & Mahdi, 2021). Therefore, developing a positive attitude towards generative AI tools is strategically important for successful implementation.
H4. 
Attitude (A) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.

2.2.5. The Importance of Enjoyment (E) in AI Adoption

Enjoyment, the feeling of pleasure and amusement during use (Perceived Enjoyment, E), is a psychological factor and a form of intrinsic motivation in the context of technology acceptance. In the literature, enjoyment is defined as the degree to which the user finds the use of technology enjoyable in itself, regardless of its expected usefulness or any external outcomes (Ursavaş et al., 2025). Venkatesh et al. (2003) pointed out early on that, in addition to perceived usefulness, perceived enjoyment also plays a key role in shaping behavioural intentions. Enjoyment not only creates a good feeling but also psychologically reinforces the user’s belief that the technology is worth using. Numerous studies have confirmed that computer systems that are considered enjoyable are more likely to be used, as positive experiences are a kind of reward that in itself motivates further use (Van der Heijden, 2004). In the field of educational technologies, Teo and Noyes (2011) also showed that in online learning systems, the enjoyment factor is strongly correlated with continuous use of the system.
The role of enjoyment in generative AI, particularly in the context of ChatGPT’s application in higher education, is supported by empirical evidence. According to a study conducted among Chinese university students in an English language learning situation, students who found ChatGPT pleasant were significantly more likely to intend to use the system for language learning purposes in the future (Xu & Thien, 2025). This shows that in addition to the mere functionality of AI tools, the user experience is also important. Another study involving Polish university students used the UTAUT2 model to examine the acceptance of ChatGPT and found that hedonic motivation (i.e., pursuit of enjoyment) had a significant positive effect on students’ behavioural intentions (Strzelecki, 2024). Although the impact of this type of enjoyment was smaller than that of performance expectations or habits, it was still significant. Enjoyment is a significant predictor of the acceptance of AI technologies. If students or teachers enjoy using a generative AI tool (such as ChatGPT), this has a positive effect on their attitudes toward the tool and increases their commitment to regular use. Enjoyment also indirectly supports acceptance by improving perceived ease of use and perceived usefulness, creating a more favourable user experience that lays the foundation for long-term adoption (Dahri et al., 2024).
H5. 
Enjoyment (E) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Figure 4 presents the theoretical framework of the research, which is based on an enhanced version of technology acceptance models, in particular TAM and UTAUT. The aim of the model is to explore the factors that influence the behavioural intentions of generation Z university students regarding the use of AI. The model focusses on the intention to use, which is examined in terms of the impact of five independent variables. Specifically, we included the constructs Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) from TAM, and Attitude (A) as a mediating factor frequently used in educational research. From UTAUT2, we incorporated Social Influence (SI) and Enjoyment (E), the latter representing hedonic motivation, which has proven especially relevant among Generation Z in voluntary and digitally enriched learning contexts (Strzelecki, 2024; Sergeeva et al., 2025).
Although UTAUT2 includes additional constructs such as Facilitating Conditions, Habit, and Price Value, these were intentionally excluded to keep the model parsimonious and focused. The aim of this study was not to test the full UTAUT2 framework, but rather to examine the most relevant psychological and social factors influencing the acceptance of AI tools in a university learning environment.
The theoretical model was tested using structural equation modelling (PLS-SEM), which allowed the quantification of the effects of individual factors and the verification of their statistical validity. The model fits well with the research objective, which was to explore the psychological and social determinants of student acceptance in an innovative and rapidly evolving technological environment.
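In compact form, and assuming standardised latent construct scores, the structural (inner) part of this model can be written as a single equation; this is a schematic rendering rather than the exact PLS-SEM estimator:

\mathrm{BI} = \beta_{1}\,\mathrm{PU} + \beta_{2}\,\mathrm{PEOU} + \beta_{3}\,\mathrm{SI} + \beta_{4}\,\mathrm{A} + \beta_{5}\,\mathrm{E} + \zeta

where \zeta denotes the structural error term and the R2 value of BI expresses the share of variance in behavioural intention explained jointly by the five constructs.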

3. Materials and Methods

3.1. Research Questions and Hypotheses

To better understand how Generation Z students engage with AI tools in higher education, this study investigates the key factors influencing their acceptance. Despite the increasing relevance of AI, limited research has explored this topic in the Hungarian context. Our aim is to provide actionable insights for institutions planning to integrate AI-based tools into educational practice. In line with the research objectives, the following research questions (RQs) were formulated:
  • (RQ1) What psychological, technological and social factors influence the acceptance of AI-based technologies in higher education?
  • (RQ2) How can university students of Generation Z be segmented based on their AI acceptance attitudes, and what differentiates these segments in terms of behavioural intention and technological openness?
  • (RQ3) To what extent do emotional versus functional factors predict the intention to use AI among university students of Generation Z?
Building on the TAM, UTAUT and UTAUT2, the study formulates five hypotheses. These hypotheses test both functional (e.g., usefulness, ease of use) and emotional (e.g., enjoyment, attitude) predictors of behavioural intention. The model also accounts for the influence of social context and user motivation.
  • H1: Perceived Usefulness (PU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
  • H2: Perceived ease of use (PEOU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
  • H3: Social influence (SI) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Gen Z university students.
  • H4: Attitude (A) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
  • H5: Enjoyment (E) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.

3.2. Data Collection and Demographic Characteristics of the Respondents

Data were collected through a questionnaire survey targeting Generation Z university students at a Hungarian university. Validated scales were used to construct the questionnaire and to assess the level of AI acceptance among Hungarian university students. The questionnaire was completed by 302 university students, whose demographic characteristics, such as gender, generation, and education, are shown in Table 1.
Since the target group of this study was specifically Generation Z university students, and the main objective was to explore their attitudes toward AI technologies, only limited demographic data were collected, namely gender and level of education. More detailed demographic segmentation was not the focus of this investigation. However, future studies may benefit from a deeper analysis of demographic characteristics, such as age, field of study, or digital proficiency, to uncover potential subgroup differences.
The respondents belong exclusively to Generation Z, so the sample falls entirely within this age group. The gender distribution of the sample shows that 39.2% of the participants are male (118) and 60.8% are female (184), indicating a female majority among the respondents. In terms of educational attainment, the majority of participants hold a high school diploma (77.8%), which is understandable since the target population consists of university students. The remaining 22.2% already hold a higher education degree, which may include bachelor’s or master’s students.

3.3. Measurement Instruments

The questionnaire contained a total of 47 questions/statements, but in this study only attitudes related to technology in general, AI acceptance, and demographic questions are analysed. Attitude statements were measured on a 5-point Likert scale. AI acceptance comprised six dimensions: (1) Perceived Ease of Use, (2) Perceived Usefulness, (3) Attitude, (4) Social Influence, (5) Behavioural Intention to Use, and (6) Enjoyment. In total, participants responded to 15 attitude statements in this section of the questionnaire. Table 2 shows the source of the statements and the validated statements.
Table 3 shows attitude statements on the use and application of new technologies. These statements were also assessed on a 5-point Likert scale, and the aim was to gauge the general attitude of university students towards new technologies in addition to their acceptance of AI. This attitude is presumably closely related to the acceptance of AI.
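To make the later analysis steps concrete, the sketch below lays out the construct-to-item mapping in Python. The item codes (PEoU1–2, PU1–3, A1–3, SI1–2, BI1–3, E1–2, UNT1–6) follow those reported in the Results, while the file name and the simple mean scoring of the 1–5 Likert responses are illustrative assumptions rather than the exact SmartPLS scoring.

import pandas as pd

# Construct-to-item mapping; item codes follow those reported in the Results.
CONSTRUCTS = {
    "PEOU": ["PEoU1", "PEoU2"],
    "PU":   ["PU1", "PU2", "PU3"],
    "A":    ["A1", "A2", "A3"],
    "SI":   ["SI1", "SI2"],
    "BI":   ["BI1", "BI2", "BI3"],
    "E":    ["E1", "E2"],
    "UNT":  ["UNT1", "UNT2", "UNT3", "UNT4", "UNT5", "UNT6"],  # general technology attitudes
}
ITEM_COLS = [item for items in CONSTRUCTS.values() for item in items]

# Hypothetical data file: one row per respondent, items coded 1-5 on the Likert scale.
df = pd.read_csv("survey_responses.csv")

# Simple mean scores per construct (an approximation; PLS-SEM derives weighted composites).
scores = pd.DataFrame({c: df[items].mean(axis=1) for c, items in CONSTRUCTS.items()})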

3.4. Data Analysis

Two different statistical procedures were used to investigate Generation Z university students’ acceptance of AI, which also constitutes the methodological novelty of the present investigation. These two methods were partial least-squares structural equation modelling (PLS-SEM) and cluster analysis. This combined approach makes it possible to identify the structural relationships that underlie technology adoption while also exploring students’ differing attitudes and behavioural patterns. IBM SPSS 29.0 and SmartPLS 4.1.1.2 were used to perform the statistical analyses.
The primary objective of PLS-SEM was to statistically test the hypotheses derived from technology acceptance theory (TAM, UTAUT) and to investigate the impact of different factors on AI usage intention. PLS-SEM can handle complex relationships involving several latent variables and measurement error, thus allowing more reliable conclusions to be drawn about students’ behavioural intention. At the same time, cluster analysis made it possible to identify different user segments among the students. First, hierarchical clustering (Ward’s method, squared Euclidean distance) was used to determine the optimal number of clusters. On the basis of the dendrogram and agglomeration schedule, a four-cluster solution was selected. Then, a K-means clustering procedure with k = 4 was applied to segment the entire sample of 302 respondents into distinct groups based on their attitudes toward AI and technology. The combination of the two methods provides a unique insight into the dynamics of AI adoption, as it reveals not only general relationships but also patterns that differ across groups of university students.
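A minimal sketch of this two-step clustering procedure is shown below, reusing the data frame and item list from the measurement sketch in Section 3.3. The actual analysis was carried out in SPSS, so the code only illustrates the logic.

from scipy.cluster.hierarchy import linkage
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Standardise the attitude items before clustering.
X = StandardScaler().fit_transform(df[ITEM_COLS])

# Step 1: Ward's hierarchical clustering; the last merge heights act as an
# agglomeration schedule and, together with the dendrogram, guide the choice of k.
Z = linkage(X, method="ward")
print(Z[-6:, 2])  # distances of the final merges; a large jump suggests the cut point

# Step 2: K-means with the chosen k = 4 assigns each of the 302 respondents to a segment.
km = KMeans(n_clusters=4, n_init=25, random_state=42)
df["cluster"] = km.fit_predict(X)

# Cluster profiles (item means per segment), analogous to Tables 5-8.
print(df.groupby("cluster")[ITEM_COLS].mean().round(2))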

4. Results

This section presents the main results of the questionnaire survey and is divided into three parts. First, descriptive statistical analysis is used to review the respondents’ perceptions and attitudes toward AI, highlighting the mean values, standard deviation, and variance of each dimension. Second, cluster analysis is used to identify different groups of students that show different patterns of AI adoption. Third, the results of structural equation modelling (PLS-SEM) are presented, which reveal the relationships between the individual variables.

4.1. Descriptive Statistics

Table 4 presents attitudes and intentions to use AI based on descriptive statistical indicators. The mean, median, mode, standard deviation, and variance values in the table provide a more detailed picture of the attitudes towards different aspects of AI.
The results of the descriptive statistical analysis show that students generally have a positive attitude toward AI use, especially for PEoU, where the mean scores ranged from 3.84 to 3.87, and PU, where the mean scores ranged from 3.71 to 3.84. This suggests that most of the respondents see AI as a practical and easy-to-use tool that can help them in their daily activities. Attitude and BI also show favourable scores, suggesting that students are open to using AI, although there is a slight variation in the extent to which they plan to use it. SI received the lowest mean score (2.66–3.23) of the dimensions tested, suggesting that students do not feel strong external pressure or expectation to use AI. This suggests that their acceptance of AI is driven more by personal factors than by the opinions of the people around them and of society. Enjoyment (E) shows a moderately high score with a mean value between 3.54 and 3.66, suggesting that although the use of AI can be enjoyable, students view it as a functional tool rather than a fun technology.
The analysis of the variance and dispersion of the data shows that respondents’ opinions are more homogeneous on some dimensions, while in others there are larger differences. The Perceived Ease of Use (PEoU: SD = 0.96–0.97, variance = 0.93–0.94) and Perceived Usefulness (PU: SD = 0.99–1.01, variance = 0.98–1.02) scores show relatively low variance, indicating that most students have similar perceptions of the ease of use and usefulness of AI. In contrast, there is greater variance for the Social Influence (SI: SD = 1.20, variance = 1.43) and Behavioural Intention (BI: SD = 1.12–1.19, variance = 1.25–1.41) variables, suggesting that there are significant differences between students in the extent to which they feel the influence of the social environment and the extent to which they plan to use AI. The variances of the Attitude (A: SD = 0.97–1.08, variance = 0.95–1.17) and Enjoyment (E: SD = 1.10–1.13, variance = 1.21–1.28) dimensions are also relatively high, indicating that students have different ratings of their overall attitudes and enjoyment factors toward AI use. These results suggest that, while the respondents’ perceptions of the usefulness and manageability of AI are relatively consistent, there are larger differences in individual attitudes, social influence, and enjoyment factors.
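For reference, the descriptive indicators reported in Table 4 (mean, median, mode, standard deviation, and variance) can be reproduced per item with a few lines of pandas, again reusing df and ITEM_COLS from the sketches above; this is an illustration of the computation, not the authors' SPSS output.

import pandas as pd

desc = pd.DataFrame({
    "mean":     df[ITEM_COLS].mean(),
    "median":   df[ITEM_COLS].median(),
    "mode":     df[ITEM_COLS].mode().iloc[0],   # first mode if several values are tied
    "std":      df[ITEM_COLS].std(),            # sample standard deviation (ddof = 1), as in SPSS
    "variance": df[ITEM_COLS].var(),
})
print(desc.round(2))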

4.2. Cluster Analysis

Following the descriptive statistical analysis, the results of the cluster analysis are presented to identify different patterns of AI adoption. The analysis identified four clusters that differ in technological openness, level of AI acceptance, and willingness to purchase new technologies.

4.2.1. Cluster 1—Moderately Open Technology Users

The first cluster, named “Moderately Open Technology Users”, includes 72 respondents who are moderately open to new technologies but are not considered to be innovators. Members of this cluster tend to have a positive attitude towards AI but do not show outstanding enthusiasm and openness towards new technologies. They accept the use of AI but do not necessarily consider it indispensable in their daily lives or consider social influences to play a decisive role in their decisions. Table 5 shows the minimum and maximum values, means, and standard deviations for the attitude statements of the first cluster.
The descriptive statistics of the first cluster show that the members of this group are moderately open to new technologies. Among the variables measuring general attitudes towards technology (UNT), the highest mean value is UNT4 (3.74), indicating that the members of the cluster tend to agree that new technologies can have a positive impact on their lives. On the other hand, UNT5 (2.65), which measures the willingness to buy new technology products, has the lowest mean value, indicating that they are less willing to spend on new technologies and do not necessarily feel the need for them in their daily lives. Variables measuring the ease of use of AI (PEoU1 and PEoU2) show medium-high scores, indicating that, in general, the members of the cluster tend to find AI easy to use. Perceptions of AI usefulness (PU1-PU3) are slightly positive but not outstanding (3.14–3.32), indicating that cluster members do not necessarily see AI as an indispensable tool but do see it as useful to some extent. The general attitude towards the use of AI (A1-A3) is mixed. Of the attitudinal variables, A1 (2.92) has the lowest value, indicating that the members of the cluster do not have a clear positive perception of AI use. The values of A2 and A3 are slightly higher, suggesting that some of the cluster perceive some advantage in using AI, but this is not clear to all. Social influence (SI) scores the lowest average of all dimensions (SI1: M = 2.07, SI2: M = 2.78), indicating that the members of the cluster feel little external pressure to adopt AI and are not particularly encouraged by their environment to use it. Variables measuring the willingness to use AI (BI1-BI3) also show relatively low values (M = 2.71–3.03), suggesting that the members of the cluster do not have a clear commitment to actively use AI, although they may consider using it in some cases. The enjoyment factor (E) scores are medium (3.03 and 2.99), suggesting that although they do not find AI use particularly fun, they do not consider it to be particularly boring or unpleasant.

4.2.2. Cluster 2—Positive AI Adopters

The second cluster was named “Positive AI Adopters” and consists of 112 participants. They can be considered more open to AI than the first cluster. Overall, the technology is viewed positively, but none of the average values for the dimensions examined exceed 4, with average values ranging from 3.17 to 3.88. In their case, social influences do not play a significant role, but this target group is likely to become active AI users and shows a positive attitude towards AI applications. Table 6 shows the descriptive statistics of the second cluster.
The statistical results of the second group show that the members of this group are generally open to new technologies and AI. Compared to the first cluster, their average scores are slightly higher in all dimensions. Among the variables measuring technological attitudes (UNT), UNT4 was the most agreed upon, with an average score of 4.00, suggesting that, overall, they believe that technological innovations have a positive impact on their lives. However, their willingness to purchase new technological products (UNT5) is quite low, with an average value of 2.66. In terms of the ease of use of AI (PEoU), we see moderately high values, suggesting that the members of this cluster consider AI to be an easy-to-use technology. Perception of AI usefulness (PU) also shows high values, indicating that the group clearly sees the benefits and effectiveness of the AI application. The general attitude toward AI use (A) is also positive, while social influence (SI) is slightly lower, suggesting that although the influence of the social environment is present, AI acceptance is primarily driven by internal motivation. The intention to use AI (BI) shows high values (3.65–3.93), which means that the members are likely to become active AI users in the future. The enjoyment factor (E) is also strong (3.66–3.86), indicating that the group considers AI use not only useful but also enjoyable.

4.2.3. Cluster 3—Sceptical and Cautious Users

The third cluster is the least accepting of AI, which is why they have been labelled “Sceptical and Cautious Users.” This group includes the 36 respondents who are the most moderate and least interested in new technologies. Participants in this group do not really see AI as a useful tool and are less likely to see its positive effects in their daily lives. Although they are willing to use the technology to some extent, their sceptical attitude and low commitment mean that they cannot be considered the most active users. Table 7 shows the descriptive statistics of the third cluster.
In the third cluster, attitudes toward general technology show the highest values, which may suggest that these participants are more dismissive of AI than they are of technology in general. Among the variables measuring technological attitudes (UNT), UNT4 has the highest value, suggesting that members of this group recognise the benefits of technological development to some extent, but are not willing to purchase new technological products (UNT5). Variables that measure AI ease of use (PEoU) show moderately low values, suggesting that group members do not necessarily find AI easy to use. Furthermore, the perception of usefulness is also low, suggesting that participants do not clearly see the benefits of AI applications. The general attitude towards the use of AI (A) also shows low values (M = 2.08–2.25). Average values of social influence (SI) can also be considered low, as they range between 1.64 and 2.17 on a scale of 5. Of the dimensions examined, the lowest average values are for the intention to use (BI), from which we can conclude that these 36 respondents do not intend to use AI-based applications and tools in the future. Also, they do not find them pleasant or entertaining.

4.2.4. Cluster 4—AI Enthusiasts and Innovators

Cluster 4 (AI Enthusiasts and Innovators) comprises a total of 82 individuals who have an overwhelmingly positive attitude toward AI technologies and new innovations. Members of this group demonstrate a high level of technological openness and consider AI not only useful but also a key tool in their daily lives. They not only support the acceptance and application of AI but also actively seek new technological opportunities, making them the most committed and enthusiastic AI users in the sample.
Table 8 shows the results of the fourth group in descending order based on the mean. This cluster is the most open to new technologies and AI, and its statistical results show that the members of the group are exceptionally open. Among the variables measuring technological attitudes (UNT1–UNT6), UNT4 (M = 4.39, SD = 0.750) and UNT6 (M = 4.18, SD = 0.848) show the highest values, suggesting that cluster members are extremely positive about technological developments and feel that AI can greatly contribute to their lives. The willingness to buy new technological products is also outstanding, with an average value of 3.29 compared to the other cluster groups. Variables measuring the ease of use of AI also show the highest values among all groups (M = 4.46 and 4.57), indicating that members find AI extremely simple and intuitive to use. The same can be said about the perception of the usefulness of AI and their general attitudes, as they consider it an effective and necessary technology. The values of social influence (SI1-SI2) (M = 3.29 and 3.90) suggest that their environment also supports the use of AI, but their acceptance is primarily due to internal motivation. The intention to use AI (BI1–BI3) is remarkably high (M = 4.65–4.73), clearly indicating that the members of the cluster are committed to actively using AI. The enjoyment factor (E1–E2) is also the highest among all clusters, indicating that cluster members find the use of AI particularly enjoyable.
Table 9 shows the four different groups that emerged from the cluster analysis based on the acceptance of AI by Generation Z university students. The groups show different attitudes and behavioural characteristics in relation to AI technologies. The first cluster is moderately open to technological innovations, with a slightly positive attitude toward AI, but cannot be considered active or committed users, and their willingness to purchase is low. In contrast, the second cluster is highly open, clearly positive about the acceptance of AI, and with a moderate willingness to purchase, it is likely that they will become active AI users. The third and smallest group is explicitly sceptical and resistant to the use of AI technologies, with minimal willingness to purchase and little perceived social pressure. Finally, the fourth cluster comprises the most innovative and enthusiastic AI users, who are extremely open, actively seek out new technologies, and are strongly influenced by social factors. In general, there are significant differences between students in their willingness to accept AI, their openness to technology, and their sensitivity to social influences. Based on the results, the innovator and positive adopter groups are likely to be active users of future AI technologies, while a separate strategy may be needed to convince the sceptic group.
Table 10 shows the outer loadings of each item within the measurement model across six constructs: Perceived Usefulness, Perceived Ease of Use, Attitude, Social Influence, Enjoyment, and Behavioural Intention. All item loadings exceed the commonly accepted threshold of 0.70, indicating good convergent validity. The strongest loadings appear within the constructs of Enjoyment and Perceived Usefulness, suggesting high internal consistency and reliability of these scales in capturing students’ perceptions toward AI adoption.
Table 11 shows the internal consistency (Cronbach's alpha and composite reliability) and convergent validity (average variance extracted, AVE) of the measurement model for each construct. Cronbach's alpha indicates how reliably the individual items measure the given construct; values above 0.7 indicate acceptable reliability. Composite reliability (rho_a and rho_c) indicates the internal consistency of the items within each construct, where values above 0.7 are likewise considered acceptable. The AVE measures how much variance a construct explains in its associated measurement items; the generally accepted minimum is 0.5. According to the results, most constructs (A, BI, E, PEOU, PU) achieve excellent reliability and validity indices, while for Social Influence (SI) the Cronbach's alpha is lower (0.668), indicating that this construct may need further refinement in future research.
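As a quick arithmetic check on how the indices in Table 11 follow from the outer loadings in Table 10, the snippet below recomputes AVE and composite reliability (rho_c) directly from those loadings; small deviations from the reported values arise because the loadings are rounded to three decimals.

```python
# Recompute AVE and composite reliability (rho_c) from the outer loadings in Table 10.
# AVE   = mean of the squared loadings
# rho_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of (1 - loading^2))
loadings = {
    "PU":   [0.945, 0.945, 0.938],
    "PEOU": [0.906, 0.931],
    "A":    [0.933, 0.933, 0.921],
    "SI":   [0.827, 0.901],
    "E":    [0.966, 0.963],
    "BI":   [0.917, 0.936, 0.924],
}

for construct, lam in loadings.items():
    ave = sum(l ** 2 for l in lam) / len(lam)
    s = sum(lam)
    rho_c = s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in lam))
    print(f"{construct}: AVE = {ave:.3f}, rho_c = {rho_c:.3f}")
```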

4.2.5. Evaluation

In this section, the results of the PLS-SEM analysis are presented. The purpose of the model is to reveal the strength of the relationships between the constructs examined and the direct impact of the factors determining the acceptance of AI technology on behavioural intention. Figure 4 shows the relationships between the latent variables and their strengths (path coefficients), as well as the factor loadings associated with each indicator. We also report the explanatory power of the model (the R² value), which shows to what extent the examined factors explain the variance in students' intention to accept AI.
Figure 5 presents the results of the structural model, in which we examined the relationships between the latent variables under investigation and their strengths in the context of AI acceptance. The left side of the figure shows the constructs (e.g., PU), whose related indicators (e.g., PU1, PU2, PU3) have high factor loadings in the range of 0.827 to 0.966. These indicators measure the given constructs well, so the measurement model is adequately validated and reliable.
The path coefficients shown in the central part of the model indicate the direct effect of each construct on BI. The strongest positive effect is exerted by Enjoyment (β = 0.605), indicating that enjoyment of using AI technology is a key factor in shaping university students' intention to use it. This is followed by Perceived Usefulness (β = 0.167), which also significantly supports the acceptance of AI. The Attitude (β = 0.100), Perceived Ease of Use (β = 0.033), and Social Influence (β = 0.111) constructs show a more moderate, although statistically significant, relationship with the intention to use, suggesting that although these are also relevant factors, they are less decisive in the acceptance of AI.
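Written out as a standardised structural equation, with the coefficients reported above and in Figure 5 and ζ denoting the structural residual, the estimated relationship is as follows:

```latex
\mathrm{BI} = 0.167\,\mathrm{PU} + 0.033\,\mathrm{PEOU} + 0.100\,\mathrm{A} + 0.111\,\mathrm{SI} + 0.605\,\mathrm{E} + \zeta, \qquad R^{2} = 0.814
```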
The explanatory power of the model for behavioural intention (R² = 0.814) is exceptionally high, which means that the factors included in the model explain the formation of AI usage intentions very well. Based on these results, highlighting the enjoyment value of use and demonstrating the practical benefits of AI play a particularly important role in practice. Less influential factors, such as social influence or ease of use, may be worth exploring in more depth in future research to better understand how they can support the widespread acceptance of AI technologies among students.
Table 12 shows the variance inflation factors (VIF) used to check for multicollinearity among the constructs examined. VIF values indicate whether there is a strong correlation between the independent variables in the model, which could distort the regression results. The generally accepted threshold for VIF is 5; values below this indicate no serious multicollinearity between the constructs. Based on the results, all VIF values for the examined constructs (Perceived Usefulness, Perceived Ease of Use, Attitude, Social Influence, and Enjoyment) remained below the accepted threshold (1.65–2.12), so it can be concluded that the model is reliable and there is no significant multicollinearity bias between the latent variables.
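For illustration, a collinearity check of this kind can be reproduced with the statsmodels library as sketched below; the construct scores are approximated here as simple item means and the file name is hypothetical, whereas the values in Table 12 are based on the latent variable scores from the PLS estimation.

```python
# Illustrative VIF check for the five predictors of BI (cf. Table 12).
# Assumption: construct scores are approximated by the mean of their items.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("ai_acceptance_survey.csv")  # hypothetical file name
scores = pd.DataFrame({
    "PU":   df[["PU1", "PU2", "PU3"]].mean(axis=1),
    "PEOU": df[["PEoU1", "PEoU2"]].mean(axis=1),
    "A":    df[["A1", "A2", "A3"]].mean(axis=1),
    "SI":   df[["SI1", "SI2"]].mean(axis=1),
    "E":    df[["E1", "E2"]].mean(axis=1),
})

X = sm.add_constant(scores)  # constant term is needed for meaningful VIFs
vif = {col: round(variance_inflation_factor(X.values, i), 2)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)  # values below 5 indicate no serious multicollinearity
```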

4.2.6. Correlation

Table 13 illustrates the correlations between the latent variables examined. Based on the values obtained, there is a strong positive correlation between behavioural intention and enjoyment (0.877), suggesting that the enjoyment value of using AI tools is closely related to university students' intention to actually use them. Perceived usefulness (0.744) and attitude (0.752) also show a significant relationship with behavioural intention, confirming that these factors greatly influence the intention to use AI. Social influence shows a moderately strong relationship with behavioural intention (0.615), while perceived ease of use shows a weaker, moderate relationship (0.490). Comparing these results with the PLS model, we see that although the correlation matrix shows strong relationships between several factors and the intention to use, the structural model differentiates the strength of the effects more precisely. The high correlation of the enjoyment factor is consistent with its strong path coefficient in the PLS model (β = 0.605), which highlights the primary role of this construct. However, despite the moderate correlation of the social influence factor, it has only a weak direct effect in the PLS model (β = 0.111), indicating that although there is a general relationship, the construct's direct influence on intention is limited.
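A correlation matrix analogous to Table 13 can be obtained in the same spirit; the snippet below again approximates construct scores as item means from a hypothetical data file, so the figures may differ slightly from those based on the PLS latent variable scores.

```python
# Pairwise Pearson correlations between approximate construct scores (cf. Table 13).
import pandas as pd

df = pd.read_csv("ai_acceptance_survey.csv")  # hypothetical file name
construct_items = {
    "A": ["A1", "A2", "A3"], "BI": ["BI1", "BI2", "BI3"], "E": ["E1", "E2"],
    "PEOU": ["PEoU1", "PEoU2"], "PU": ["PU1", "PU2", "PU3"], "SI": ["SI1", "SI2"],
}
scores = pd.DataFrame({c: df[cols].mean(axis=1) for c, cols in construct_items.items()})
print(scores.corr(method="pearson").round(3))
```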

5. Discussion

Research focusing on the attitudes and behavioural intentions of Generation Z university students offers valuable insights for the more effective integration of AI tools in higher education. The relevance of this topic is underscored by the growing number of Hungarian universities that are developing institutional guidelines and training initiatives around AI, while students themselves are already creatively incorporating tools like ChatGPT into their everyday learning routines.
  • Research questions
  • (RQ1) What psychological, technological, and social factors influence the acceptance of AI-based technologies in higher education?
In this context, it is particularly important to explore the factors that encourage or discourage young people's use of AI, as their attitudes and experiences fundamentally determine the educational integration of innovative tools. The results show that the acceptance of AI tools is shaped primarily by students' own perceptions and experiences. Among the psychological factors, students' general attitude towards AI stands out: those who have a favourable, open attitude towards the technology also report a much higher intention to use it. This is consistent with the basic tenets of technology acceptance models (e.g., TAM, TPB), which state that positive attitudes increase behavioural intention. Another psychological factor is the level of intrinsic motivation or enjoyment: if students find using AI enjoyable, this reinforces their positive attitude towards, and motivation to use, the tool. In our research, we found that the experiential factor of AI use contributes significantly to usage willingness, indicating that for young users it is not only what the technology can do that matters but also how it feels to use it.
  • (RQ2) How can university students of Generation Z be segmented based on their AI acceptance attitudes, and what differentiates these segments in terms of behavioural intention and technological openness?
The cluster analysis based on attitudes revealed four different student groups, indicating that Generation Z university students do not have a uniform attitude toward AI. At one extreme are the “AI enthusiasts and innovators,” who are extremely open and enthusiastic about new technologies. They are characterised by very positive attitudes and high usage intentions: they are committed to using AI tools in their studies in the future. For them, the use of AI is almost self-evident, a desirable innovation, and their technological openness is outstanding. At the other end of the spectrum is the group of “sceptical and cautious users” with a low level of acceptance. They are mostly distrustful of or uncertain about AI tools, and their attitude is rather negative or hesitant, manifesting itself in weak behavioural intentions; they do not plan to actively use these tools. Their technological openness is also limited: many of them would only use AI if they had to, or if its use were completely simple and risk-free. Two intermediate segments can also be identified between these extremes. The four segments are clearly distinct in their behavioural intentions and degree of technological openness, and the results statistically confirm significant differences between the groups. All this suggests that the student population is not homogeneous: there are different attitude groups that need to be addressed and supported in different ways during the integration of AI.
  • (RQ3) To what extent do emotional versus functional factors predict the intention to use AI among university students of Generation Z?
The third research question focused on the roles played by emotional (for example, experiential) and functional (for example, utility-related) factors in the intention to use the technology (RQ3). Our results show that both types of factors are significant, but the influence of emotional motives is particularly strong. Specifically, the structural model showed that enjoyment is the strongest predictor of usage intention, exceeding even the effect of usefulness. If students experience using an AI tool as an enjoyable and motivating process, they are more likely to continue using it than if they simply know that the tool is useful. Of course, functional benefits remain important: students need to see concrete advantages (such as better grades, time savings, and more efficient learning) in order to commit to a technology in the long term. This finding is consistent with recent international literature. For example, UTAUT2-based research in Russia examining the adoption of generative AI showed that hedonic motivation (the enjoyment factor) significantly increased students' intention to use AI (Sergeeva et al., 2025). Similarly, in a survey of Chinese university students, those who found ChatGPT entertaining were much more likely to plan to use the system for language learning purposes in the future (Xu & Thien, 2025).
All five hypotheses were supported by the PLS-SEM analysis.
H1. 
Perceived Usefulness (PU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
The results confirm that perceived usefulness has a significant positive effect on the behavioural intention to use AI tools, supporting H1. This finding is consistent with previous studies highlighting that when students recognise the academic value and functional utility of AI, they are more likely to adopt it (Chan & Hu, 2023). In our model, usefulness emerged as the second-strongest predictor of intention after enjoyment, underscoring the importance of task-relevant benefits in the context of learning technologies.
H2. 
Perceived Ease of Use (PEOU) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Although perceived ease of use did not have a strong direct effect on behavioural intention, it significantly influenced attitude, which, in turn, impacted intention. This mediating effect suggests that Generation Z students are more likely to adopt AI tools when they perceive them as intuitive and simple to use—an effect that operates primarily through the formation of positive attitudes. This supports earlier TAM-based research emphasising usability as a key antecedent of attitude (Venkatesh & Davis, 2000).
H3. 
Social Influence (SI) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Although social influence was found to be a statistically significant predictor of behavioural intention, its effect size was relatively weak (β = 0.111). This suggests that while peer norms, instructor attitudes, and the general academic environment may play a role in shaping students' decisions to adopt AI tools, these factors are less influential than internal motivations such as enjoyment or perceived usefulness. This indicates that Generation Z students may rely more on personal evaluations than on external expectations when adopting emerging technologies such as generative AI.
H4. 
Attitude (A) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
Attitude toward using AI tools was found to have a significant positive effect on behavioural intention, confirming H4. This indicates that students who feel positively about AI are more inclined to use it in their academic routines. This finding reinforces the role of affective evaluations in technology acceptance models, particularly in voluntary learning environments, where internal motivation plays a crucial role.
H5. 
Enjoyment (E) has a positive effect on the Behavioural Intention to Use (BI) of AI technology among Generation Z university students.
H5 was strongly supported. Enjoyment, which represents hedonic motivation, had the highest standardised path coefficient in the model (β = 0.605), making it the most powerful predictor of behavioural intention. This finding underscores that emotional engagement and user experience are central to the adoption of AI tools among Generation Z, who are known to value interactivity, gamification, and personalisation in digital environments (Strzelecki, 2024).
Our results are consistent with findings in the international literature, particularly in emphasising the role of enjoyment and attitude. Xu and Thien (2025) studied Chinese university students and found that perceived enjoyment plays a mediating role in the formation of usage intention and that social influence also had a positive effect on willingness to use. This confirms that emotional factors and the social environment can significantly influence the acceptance of AI tools, especially in a language learning context. Similarly, Strzelecki (2024) found that hedonic motivation has a positive effect on the intention to use ChatGPT, while the effect of social influence proved to be negligible. This is in line with our findings, where enjoyment was also the strongest predictor and the role of social influence was comparatively weaker. Other studies also emphasise usefulness (PU), ease of use (PEOU), and their indirect impact on the intention to use. Zou and Huang (2023) showed in their research among doctoral students that perceived usefulness and ease of use indirectly influence the use of ChatGPT through attitude, as we also found in our research. At the same time, Hoi's (2020) study on the use of mobile technologies for language learning also highlights the importance of usability and the experiential dimension. Based on this, the responses of Hungarian Generation Z students are well aligned with international trends, but they also emphasise immediate enjoyment as a driving force of acceptance, which may also reflect cultural characteristics.
The findings highlighted enjoyment and perceived usefulness as the strongest predictors of behavioural intention, while attitude and social influence also played a significant role. In contrast, Habibi et al. (2023), using UTAUT2 with Indonesian university students, found that facilitating conditions were the most important driver of behavioural intention, while effort expectancy was not significant. Although perceived usefulness was included, it did not emerge as the strongest factor, suggesting contextual or infrastructural differences. Similarly, Barakat et al. (2025), who focused on university educators, found that anxiety and perceived risk had a considerable impact alongside perceived usefulness and social influence. These contrasts suggest that while usefulness and social influence are widely relevant, their impact varies with user role, institutional maturity, and local digital readiness.
In summary, the data analysis followed three main directions. First, the structural model (PLS-SEM) examined the relationships among the six theoretical constructs and the direct effects of the predictors on the behavioural intention to use AI. Second, the cluster analysis segmented the students into four groups based on their openness to technology, level of acceptance, and willingness to purchase. Third, correlation analysis was used to explore the strength and direction of the relationships between the examined variables. Together, these complementary methods provide a comprehensive picture of AI acceptance among Hungarian university students.

6. Limitations

When interpreting the results of the study, several limitations must be taken into account. First, this is a cross-sectional study: data were collected at a specific point in time, reflecting the opinions of students at that time. We were unable to examine changes in attitudes and intentions over time, nor to examine how longer-term experience with AI tools shapes usage patterns. Second, the sample is specific in terms of geography and culture: it focused on students at a Hungarian university. The cultural context and institutional background may influence the results, so caution should be exercised when generalising them. Third, data collection was carried out with a self-reported questionnaire, which has the potential for bias. Respondents may have responded in a socially desirable manner (for example, reporting their attitudes as better than they actually are) or may not have been able to accurately assess their own future behaviour. Furthermore, although our model covered the main psychological and technological factors, it did not cover all possible influencing factors. For example, we did not examine the level of trust in AI, risk perception, or ethical concerns, although these may also influence acceptance according to the literature. As a result, it is difficult to directly compare the strength of each predictor’s influence on overall acceptance; rather, only the relationships between the examined variables within this model can be interpreted and contrasted.

7. Conclusions

The aim of this research was to explore the factors influencing Generation Z university students' attitudes toward AI technologies and, overall, how open and accepting they are toward them. In the questionnaire-based research, we analysed the responses of 302 students, measuring user attitudes and behavioural intentions across multiple dimensions, including perceived usefulness, ease of use, enjoyment, attitude, and social influence. Two main methods were used to analyse the data: cluster analysis and PLS-SEM. The cluster analysis identified four distinct student groups, with the “Sceptical and Cautious Users” showing low acceptance and the “AI Enthusiasts and Innovators” being strongly committed to using the technology. This suggests that attitudes toward AI vary significantly between student segments, which justifies differentiated education policy and communication strategies. The results showed that among the factors examined, enjoyment had the strongest and most significant effect on behavioural intention, confirming the importance of hedonic motivation in the voluntary use of educational technology. Perceived usefulness also had a positive and significant effect, indicating that students value practical benefits. Attitude and social influence contributed to acceptance to a lesser extent but were still statistically relevant. By contrast, perceived ease of use showed only a marginal impact, suggesting that Generation Z students, being digital natives, do not consider usability a key barrier to adoption. Behavioural intention, as the outcome variable, was shaped most strongly by emotional and utility-based perceptions, reinforcing the importance of student-centred design in AI-based educational tools.
The significance of the research lies in its ability to simultaneously identify different student attitude groups and quantitatively explore the main factors that influence acceptance using a combined methodological approach. The study offers a new perspective on the integration of AI into education and can help universities develop segment-based intervention strategies. Future research should include longitudinal studies to explore how students’ attitudes toward AI technologies change over time and as the technology evolves. In addition, including additional psychological and sociological factors, such as trust, data security, and identity, could help provide a more complete understanding of the acceptance of technology. The inclusion of other higher education contexts and countries would also allow for the generalisation of results and the exploration of cultural differences.
Based on the results, the acceptance of AI is most influenced by enjoyment and positive attitudes, so developers and educational institutions should offer user-friendly, experience-based AI solutions that are not only useful but also motivating for students. Decision makers should support training related to AI use, which reinforces positive attitudes and reduces resistance arising from a lack of knowledge. It is important to take student feedback into account when integrating AI tools, especially with respect to interactivity and usability. Marketing and communication strategies should emphasise the experience-based benefits of technology. Finally, ensuring ethical and transparent operations is the key to maintaining long-term acceptance.
Based on the findings and limitations identified, several future research directions emerge. One of the most important is the implementation of longitudinal studies. It would be worthwhile to follow the same students over a longer period of time to find out how their attitudes and usage habits change in relation to AI tools, for example, over a semester or during their university years. Second, it would be worthwhile to expand the range of factors examined in the future. Our research focused on the most important components of TAM and UTAUT (usefulness, ease of use, attitude, social influence, enjoyment), but did not cover the full spectrum of psychological and ethical factors related to technology. Future studies could include variables such as trust in AI, cybersecurity/data privacy concerns, technological anxiety, or user competence. Finally, further research should also expand the methodology. Quantitative analysis could be combined with a qualitative approach, such as interviews or focus groups, to gain a deeper understanding of the motivations and concerns. A more detailed description of the clusters we identified would help us understand exactly why a “sceptical” group is distrustful: this could be due to bad experiences, lack of information, or ethical reservations.
Given the uncertainty among students, higher education institutions should develop clear and consistent guidelines on the use of AI tools, especially generative AI such as ChatGPT. These guidelines should include ethical frameworks, permitted uses, and rules for referencing AI-generated content. Additionally, policymakers should encourage the integration of AI into the curriculum, for example, through pilot programmes or credit-bearing courses. Teacher training must also respond to new technological challenges, ensuring that educators have the necessary competencies to apply AI in education. In the long term, it is necessary to create an innovation-friendly higher education environment in which the responsible, regulated, and value-creating use of AI becomes the norm. Overall, the study highlights that the adoption of AI technologies is a complex, multifactorial process and that different types of students require different approaches. Addressing the practical, emotional, and social dimensions together is essential for AI tools to become a natural and valuable part of education.

Author Contributions

Conceptualization, R.K. and B.E.B.; methodology, R.K.; software, R.K.; validation, B.E.B. and R.K.; formal analysis, R.K.; investigation, R.K.; resources, B.E.B.; data curation, R.K.; writing—original draft preparation, R.K.; writing—review and editing, B.E.B.; visualisation, R.K.; supervision, B.E.B.; project administration, B.E.B.; funding acquisition, B.E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Science Ethics Committee of the Scientific Advisory Board, Széchenyi István University (Approval Code: SZE/ETT-44/2025 (VII.28.); Approval Date: 28 July 2025).

Informed Consent Statement

The data collection was conducted via an online questionnaire, which was anonymous and did not involve the collection of any personal or identifying information from participants. Therefore, no signed informed consent forms were obtained, as the survey design did not require participants to provide names, contact details, or any other data that could lead to individual identification.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Supported by the EKÖP-24-3-1 university research fellowship program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund (R.K.). This research was supported by the Széchenyi István University Foundation (B.E.B.).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI – Artificial Intelligence
GenAI – Generative Artificial Intelligence
TAM – Technology Acceptance Model
UTAUT – Unified Theory of Acceptance and Use of Technology
UTAUT2 – Extended Unified Theory of Acceptance and Use of Technology
PLS-SEM – Partial Least Squares Structural Equation Modelling
TRA – Theory of Reasoned Action
TPB – Theory of Planned Behaviour
IoT – Internet of Things
AIDUA – AI Decision Use and Acceptance model
T-AIA – Technology–Artificial Intelligence Acceptance model
VIF – Variance Inflation Factor
AVE – Average Variance Extracted
PU – Perceived Usefulness
PEOU – Perceived Ease of Use
BI – Behavioural Intention (to Use)
SI – Social Influence
A – Attitude
E – Enjoyment

References

  1. Agarwal, R., & Karahanna, E. (2000). Time flies when you’re having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24, 665–694. [Google Scholar] [CrossRef]
  2. AIPRM. (2023). AI in education: Key statistics and trends. Available online: https://www.aiprm.com/ai-in-education-statistics/ (accessed on 7 April 2025).
  3. Al Giffari, H. A., Mayukh, N., Najib, N. A. M., Ma, X. C., & Ruslan, N. (2023). Exploring Iium Gombak students’perception and actual use of ChatGPT for educational purposes: A quantitative study. Journal of Communication Education, 2, 37–58. [Google Scholar]
  4. Al-Abdullatif, A. M., & Alsubaie, M. A. (2024). ChatGPT in learning: Assessing students’ use intentions through the lens of perceived value and the influence of AI literacy. Behavioral Sciences, 14(9), 845. [Google Scholar] [CrossRef] [PubMed]
  5. Alfadda, H. A., & Mahdi, H. S. (2021). Measuring students’ use of zoom application in language course based on the technology acceptance model (TAM). Journal of Psycholinguistic Research, 50(4), 883–900. [Google Scholar] [CrossRef] [PubMed]
  6. Aljohani, W. (2024). ChatGPT for EFL: Usage and Perceptions among BA and MA Students. Open Journal of Modern Linguistics, 14(6), 1119–1139. [Google Scholar] [CrossRef]
  7. Almassaad, A., Alajlan, H., & Alebaikan, R. (2024). Student perceptions of generative artificial intelligence: Investigating utilization, benefits, and challenges in higher education. Systems, 12(10), 385. [Google Scholar] [CrossRef]
  8. Almogren, A. S., Al-Rahmi, W. M., & Dahri, N. A. (2024). Exploring factors influencing the acceptance of ChatGPT in higher education: A smart education perspective. Heliyon, 10(11), e31887. [Google Scholar] [CrossRef]
  9. Bakkabulindi, F. E. K. (2014). A call for return to Rogers’ innovation diffusion theory. Makerere Journal of Higher Education, 6(1), 55–85. [Google Scholar] [CrossRef]
  10. Balaskas, S., Tsiantos, V., Chatzifotiou, S., & Rigou, M. (2025). Determinants of ChatGPT adoption intention in higher education: Expanding on TAM with the mediating roles of trust and risk. Information, 16(2), 82. [Google Scholar] [CrossRef]
  11. Barakat, M., Salim, N. A., & Sallam, M. (2025). University educators perspectives on ChatGPT: A technology acceptance model-based study. Open Praxis, 17(1), 129–144. [Google Scholar] [CrossRef]
  12. Basch, C., Hillyer, G., Gold, B., & Yousaf, H. (2025). Understanding artificial intelligence knowledge and usage among college students: Insights from a survey on classroom, coursework, and personal applications. Review of Artificial Intelligence in Education, 6, e037. [Google Scholar] [CrossRef]
  13. Boussouf, Z., Amrani, H., Khal, M. Z., & Daidai, F. (2024). Artificial intelligence in education: A systematic literature review. Data and Metadata, 3, 288. [Google Scholar] [CrossRef]
  14. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  15. Chardonnens, S. (2025). Adapting educational practices for Generation Z: Integrating metacognitive strategies and artificial intelligence. In Frontiers in education (Vol. 10, p. 1504726). Frontiers Media SA. [Google Scholar] [CrossRef]
  16. Dabis, A., & Csáki, C. (2024). AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI. Humanities and Social Sciences Communications, 11(1), 1–13. [Google Scholar] [CrossRef]
  17. Dahri, N. A., Yahaya, N., Al-Rahmi, W. M., Aldraiweesh, A., Alturki, U., Almutairy, S., Shutaleva, A., & Soomro, R. B. (2024). Extended TAM based acceptance of AI-Powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon, 10(8), e29317. [Google Scholar] [CrossRef]
  18. Daskalaki, E., Psaroudaki, K., & Fragopoulou, P. (2024). Navigating the future of education: Educators’ insights on AI integration and challenges in Greece, Hungary, Latvia, Ireland and Armenia. arXiv, arXiv:2408.15686. [Google Scholar] [CrossRef]
  19. Davis, F. D. (1989). Technology acceptance model: TAM. In Al-Suqri, MN, Al-Aufi, AS: Information seeking behavior and technology adoption (pp. 205–219). IGI Global. [Google Scholar]
  20. Eurydice. (2025). Review of the role of artificial intelligence in higher education in Hungary. European Commission Eurydice. Available online: https://eurydice.eacea.ec.europa.eu/news/hungary-review-role-artificial-intelligence-higher-education (accessed on 6 August 2025).
  21. Ge, T. (2024). Assessing the acceptance and utilization of ChatGPT by Chinese university students in English writing education. International Journal of Learning and Teaching, 10(1), 166–170. [Google Scholar] [CrossRef]
  22. Granić, A., & Marangunić, N. (2019). Technology acceptance model in educational context: A systematic literature review. British Journal of Educational Technology, 50(5), 2572–2593. [Google Scholar] [CrossRef]
  23. Gupta, V., & Yang, H. (2024). Study protocol for factors influencing the adoption of ChatGPT technology by startups: Perceptions and attitudes of entrepreneurs. PLoS ONE, 19(2), e0298427. [Google Scholar] [CrossRef]
  24. Habibi, A., Muhaimin, M., Danibao, B. K., Wibowo, Y. G., Wahyuni, S., & Octavia, A. (2023). ChatGPT in higher education learning: Acceptance and use. Computers and Education: Artificial Intelligence, 5, 100190. [Google Scholar] [CrossRef]
  25. Hoi, V. N. (2020). Understanding higher education learners’ acceptance and use of mobile devices for language learning: A Rasch-based path modeling approach. Computers & Education, 146, 103761. [Google Scholar] [CrossRef]
  26. Kelly, S., Kaye, S. A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. [Google Scholar] [CrossRef]
  27. Koteczki, R., Csikor, D., & Balassa, B. E. (2025). The role of generative AI in improving the sustainability and efficiency of HR recruitment process. Discover Sustainability, 6(1), 1–28. [Google Scholar] [CrossRef]
  28. Lee, C. S., Tan, L. E., & Goh, D. H. L. (2025). Examining generation Z’s use of generative AI from an affordance-based approach. Information Research an International Electronic Journal, 30(iConf), 1095–1102. [Google Scholar] [CrossRef]
  29. Lo, C. K., Hew, K. F., & Jong, M. S. Y. (2024). The influence of ChatGPT on student engagement: A systematic review and future research agenda. Computers & Education, 219, 105100. [Google Scholar] [CrossRef]
  30. Manresa, A., Sammour, A., Mas-Machuca, M., Chen, W., & Botchie, D. (2024). Humanizing GenAI at work: Bridging the gap between technological innovation and employee engagement. Journal of Managerial Psychology, 40, 472–492. [Google Scholar] [CrossRef]
  31. Masadeh, S. A., & El-Haggar, N. (2024). Analyzing factors influencing IoT adoption in higher educational institutions in Saudi Arabia using a modified TAM model. Education and Information Technologies, 29(5), 6407–6441. [Google Scholar] [CrossRef]
  32. Menon, D., & Shilpa, K. (2023). “Chatting with ChatGPT”: Analyzing the factors influencing users’ intention to use the open AI’s ChatGPT using the UTAUT model. Heliyon, 9(11), e1914. [Google Scholar] [CrossRef]
  33. Mogaji, E., Viglia, G., Srivastava, P., & Dwivedi, Y. K. (2024). Is it the end of the technology acceptance model in the era of generative artificial intelligence? International Journal of Contemporary Hospitality Management, 36(10), 3324–3339. [Google Scholar] [CrossRef]
  34. Mustafa, M. Y., Tlili, A., Lampropoulos, G., Huang, R., Jandrić, P., Zhao, J., Salha, S., Xu, L., Panda, S., Kinshuk, López-Pernas, S., & Saqr, M. (2024). A systematic review of literature reviews on artificial intelligence in education (AIED): A roadmap to a future research agenda. Smart Learning Environments, 11(1), 1–33. [Google Scholar] [CrossRef]
  35. Or, C. (2024). Watch that attitude! Examining the role of attitude in the technology acceptance model through meta-analytic structural equation modelling. International Journal of Technology in Education and Science, 8(4), 558–582. [Google Scholar] [CrossRef]
  36. Panagiotopoulos, I., & Dimitrakopoulos, G. (2018). An empirical investigation on consumers’ intentions towards autonomous driving. Transportation Research Part C: Emerging Technologies, 95, 773–784. [Google Scholar] [CrossRef]
  37. Pitts, G., Marcus, V., & Motamedi, S. (2025). Student perspectives on the benefits and risks of AI in education. arXiv, arXiv:2505.02198. [Google Scholar] [CrossRef]
  38. Rahman, M. M., Lesch, M. F., Horrey, W. J., & Strawderman, L. (2017). Assessing the utility of TAM, TPB, and UTAUT for advanced driver assistance systems. Accident Analysis & Prevention, 108, 361–373. [Google Scholar] [CrossRef]
  39. Sergeeva, O. V., Zheltukhina, M. R., Shoustikova, T., Tukhvatullina, L. R., Dobrokhotov, D. A., & Kondrashev, S. V. (2025). Understanding higher education students’ adoption of generative AI technologies: An empirical investigation using UTAUT2. Contemporary Educational Technology, 17(2), ep571. [Google Scholar] [CrossRef]
  40. Shahzad, M. F., Xu, S., & Javed, I. (2024). ChatGPT awareness, acceptance, and adoption in higher education: The role of trust as a cornerstone. International Journal of Educational Technology in Higher Education, 21(1), 46. [Google Scholar] [CrossRef]
  41. Singla, A., Sukharevsky, A., Yee, L., Chui, M., & Hall, B. (2024). The state of AI in early 2024. QuantumBlack, AI by McKinsey and McKinsey Digital. Available online: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-2024 (accessed on 7 April 2025).
  42. Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics, 47, 101324. [Google Scholar] [CrossRef]
  43. Solomovich, L., & Abraham, V. (2024). Exploring the influence of ChatGPT on tourism behavior using the technology acceptance model. Tourism Review. ahead-of-print. [Google Scholar] [CrossRef]
  44. Statista. (2024a). Machine learning market size worldwide from 2020 to 2030. Available online: https://www.statista.com/forecasts/1449854/machine-learning-market-size-worldwide (accessed on 7 April 2025).
  45. Statista. (2024b). Share of organizations worldwide that have adopted artificial intelligence (AI) as of 2023. Available online: https://www.statista.com/statistics/1545783/ai-adoption-among-organizations-worldwide/ (accessed on 7 April 2025).
  46. Statista. (2024c). Opinion on artificial intelligence (AI) having a positive or negative effect on education in the United States as of March 2023. Available online: https://www.statista.com/statistics/1461799/opinion-ai-effect-education-united-states/ (accessed on 7 April 2025).
  47. Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. [Google Scholar] [CrossRef]
  48. Teo, T., & Noyes, J. (2011). An assessment of the influence of perceived enjoyment and attitude on the intention to use technology among pre-service teachers: A structural equation modeling approach. Computers & Education, 57(2), 1645–1653. [Google Scholar] [CrossRef]
  49. Ursavaş, Ö. F., Yalçın, Y., İslamoğlu, H., Bakır-Yalçın, E., & Cukurova, M. (2025). Rethinking the importance of social norms in generative AI adoption: Investigating the acceptance and use of generative AI among higher education students. International Journal of Educational Technology in Higher Education, 22(1), 1–22. [Google Scholar] [CrossRef]
  50. Van der Heijden, H. (2004). User acceptance of hedonic information systems. MIS Quarterly, 28, 695–704. [Google Scholar] [CrossRef]
  51. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. [Google Scholar] [CrossRef]
  52. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425–478. [Google Scholar] [CrossRef]
  53. Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36, 157–178. [Google Scholar] [CrossRef]
  54. Vieriu, A. M., & Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Education Sciences, 15(3), 343. [Google Scholar] [CrossRef]
  55. Wang, C., Ahmad, S. F., Ayassrah, A. Y. B. A., Awwad, E. M., Irshad, M., Ali, Y. A., Al-Razgan, M., Khan, Y., & Han, H. (2023). An empirical evaluation of technology acceptance model for Artificial Intelligence in E-commerce. Heliyon, 9(8), e18349. [Google Scholar] [CrossRef]
  56. Williams, M. D., Rana, N. P., & Dwivedi, Y. K. (2015). The unified theory of acceptance and use of technology (UTAUT): A literature review. Journal of Enterprise Information Management, 28(3), 443–488. [Google Scholar] [CrossRef]
  57. Wu, C., Zhang, H., & Carroll, J. M. (2024). AI governance in higher education: Case studies of guidance at big ten universities. arXiv, arXiv:2409.02017. [Google Scholar] [CrossRef]
  58. Xu, X., & Thien, L. M. (2025). Unleashing the power of perceived enjoyment: Exploring Chinese undergraduate EFL learners’ intention to use ChatGPT for English learning. Journal of Applied Research in Higher Education, 17(2), 578–593. [Google Scholar] [CrossRef]
  59. Zou, M., & Huang, L. (2023). To use or not to use? Understanding doctoral students’ acceptance of ChatGPT in writing through technology acceptance model. Frontiers in Psychology, 14, 1259531. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Technology Acceptance Model (TAM).
Figure 2. Unified Theory of Acceptance and Use of Technology (UTAUT).
Figure 3. Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). Solid arrows represent hypothesised direct effects between latent variables in the core model. Dashed arrows indicate the moderating effects of gender, age, and experience on these relationships.
Figure 4. Theoretical framework for the hypotheses.
Figure 5. PLS-SEM structural model. The blue circles represent latent constructs (PU, PEOU, A, SI, E, BI). The yellow boxes indicate the observed indicators associated with each construct. The numerical values represent standardized outer loadings (for indicators) and path coefficients (between constructs).
Table 1. Demographic data of the participants.
Gender | Number | %
Male | 118 | 39.2%
Female | 184 | 60.8%
Generation | Number | %
Generation Z | 302 | 100.0%
Education | Number | %
Secondary school leaving certificate | 235 | 77.8%
University degree (BSc, MSc) | 67 | 22.2%
Table 2. Attitude statements in the survey related to AI acceptance.
Construct | No. | Measurement Items | References
Perceived Ease of Use | PEoU1 | Using the AI product would be easy. | Sohn and Kwon (2020); Venkatesh and Davis (2000)
 | PEoU2 | Interaction with the AI product would be clear and understandable. |
Perceived Usefulness | PU1 | Using the AI product would improve my daily work performance. | Sohn and Kwon (2020); Venkatesh and Davis (2000)
 | PU2 | Using the AI product would improve efficiency in my daily work. |
 | PU3 | I would find the AI product useful in my daily work. |
Attitude | A1 | The use of the AI product in everyday life would be good. | Sohn and Kwon (2020); Rahman et al. (2017)
 | A2 | Use of the AI product in everyday life would be useful. |
 | A3 | The use of the AI product in everyday life would be effective. |
Social Influence | SI1 | People who influence my behaviour would think that I should use the AI product. | Sohn and Kwon (2020); Venkatesh et al. (2012)
 | SI2 | People around me will take a positive view of me using the AI product. |
Behavioural Intention to Use | BI1 | I intend to use the AI product in the future. | Sohn and Kwon (2020); Rahman et al. (2017)
 | BI2 | I intend to use the AI product frequently. |
 | BI3 | I intend to recommend that other people use the AI product. |
Enjoyment | E1 | I would enjoy using the AI product. | Agarwal and Karahanna (2000)
 | E2 | I would have fun interacting with the AI product. |
Table 3. Attitude statements on the acceptance of new technologies.
Dimension | No. | Measurement Items | References
Using or adopting new technologies | UNT1 | It is important to keep up with the latest trends in new technology. | (Panagiotopoulos & Dimitrakopoulos, 2018)
 | UNT2 | New technology causes people to waste too much time. |
 | UNT3 | New technology makes life more complicated. |
 | UNT4 | New technology will provide solutions to many of our problems. |
 | UNT5 | I often buy new technology products, even if they are expensive. |
 | UNT6 | Science and technology are making our lives healthier, easier, and more comfortable. |
Table 4. Descriptive statistics.
Item | N Valid | Mean | Median | Mode | Std. Deviation | Variance
PEoU1 | 302 | 3.87 | 4 | 4 | 0.96 | 0.93
PEoU2 | 302 | 3.84 | 4 | 4 | 0.97 | 0.94
PU1 | 302 | 3.71 | 4 | 4 | 0.99 | 0.98
PU2 | 302 | 3.74 | 4 | 4 | 1.01 | 1.01
PU3 | 302 | 3.84 | 4 | 4 | 1.01 | 1.02
A1 | 302 | 3.62 | 4 | 4 | 1.08 | 1.17
A2 | 302 | 3.79 | 4 | 4 | 0.97 | 0.95
A3 | 302 | 3.70 | 4 | 4 | 1.00 | 0.99
SI1 | 302 | 2.66 | 3 | 3 | 1.20 | 1.43
SI2 | 302 | 3.23 | 3 | 3 | 0.97 | 0.95
BI1 | 302 | 3.69 | 4 | 4 | 1.12 | 1.25
BI2 | 302 | 3.47 | 4 | 4 | 1.19 | 1.41
BI3 | 302 | 3.47 | 4 | 4 | 1.19 | 1.41
E1 | 302 | 3.66 | 4 | 4 | 1.10 | 1.21
E2 | 302 | 3.54 | 4 | 4 | 1.13 | 1.28
Table 5. Cluster 1.
Descriptive Statistics of Cluster 1
Item | N | Minimum | Maximum | Mean | Std. Deviation
UNT4 | 72 | 1 | 5 | 3.74 | 0.86
PEoU1 | 72 | 1 | 5 | 3.64 | 0.92
UNT1 | 72 | 1 | 5 | 3.58 | 0.95
PEoU2 | 72 | 1 | 5 | 3.54 | 0.89
UNT3 | 72 | 1 | 5 | 3.47 | 0.90
UNT2 | 72 | 1 | 5 | 3.35 | 1.06
PU3 | 72 | 2 | 5 | 3.32 | 0.67
UNT6 | 72 | 1 | 5 | 3.25 | 0.98
A2 | 72 | 2 | 5 | 3.17 | 0.56
PU2 | 72 | 2 | 5 | 3.15 | 0.73
PU1 | 72 | 2 | 5 | 3.14 | 0.68
A3 | 72 | 1 | 4 | 3.11 | 0.70
E1 | 72 | 2 | 5 | 3.03 | 0.58
BI1 | 72 | 2 | 5 | 3.03 | 0.67
E2 | 72 | 1 | 4 | 2.99 | 0.68
SI2 | 72 | 1 | 4 | 2.78 | 0.68
BI3 | 72 | 1 | 4 | 2.72 | 0.70
BI2 | 72 | 1 | 5 | 2.71 | 0.68
UNT5 | 72 | 1 | 5 | 2.65 | 1.09
A1 | 72 | 1 | 5 | 2.92 | 0.71
SI1 | 72 | 1 | 5 | 2.07 | 0.97
Table 6. Cluster 2.
Descriptive Statistics of Cluster 2
Item | N | Minimum | Maximum | Mean | Std. Deviation
UNT4 | 112 | 2 | 5 | 4.00 | 0.64
A2 | 112 | 2 | 5 | 3.99 | 0.55
PU3 | 112 | 2 | 5 | 3.97 | 0.64
BI1 | 112 | 2 | 5 | 3.93 | 0.65
E1 | 112 | 3 | 5 | 3.86 | 0.57
PEoU1 | 112 | 2 | 5 | 3.86 | 0.73
PU2 | 112 | 2 | 5 | 3.84 | 0.62
PU1 | 112 | 2 | 5 | 3.82 | 0.57
UNT1 | 112 | 1 | 5 | 3.80 | 0.88
A3 | 112 | 3 | 5 | 3.79 | 0.54
A1 | 112 | 1 | 5 | 3.78 | 0.63
PEoU2 | 112 | 2 | 5 | 3.77 | 0.71
BI2 | 112 | 2 | 5 | 3.70 | 0.67
E2 | 112 | 2 | 5 | 3.66 | 0.65
BI3 | 112 | 2 | 5 | 3.65 | 0.63
UNT3 | 112 | 1 | 5 | 3.55 | 1.05
UNT6 | 112 | 1 | 5 | 3.55 | 0.88
UNT2 | 112 | 1 | 5 | 3.39 | 0.97
SI2 | 112 | 2 | 5 | 3.38 | 0.69
SI1 | 112 | 1 | 5 | 2.96 | 0.92
UNT5 | 112 | 1 | 4 | 2.66 | 0.89
Table 7. Cluster 3.
Descriptive Statistics of Cluster 3
Item | N | Minimum | Maximum | Mean | Std. Deviation
UNT4 | 36 | 2 | 5 | 3.44 | 0.94
UNT1 | 36 | 1 | 5 | 3.00 | 1.04
UNT3 | 36 | 1 | 5 | 2.97 | 1.03
UNT6 | 36 | 1 | 5 | 2.97 | 0.88
PEoU1 | 36 | 1 | 5 | 2.97 | 1.11
UNT2 | 36 | 1 | 5 | 2.94 | 1.15
PEoU2 | 36 | 1 | 5 | 2.83 | 1.16
PU3 | 36 | 1 | 4 | 2.33 | 0.96
PU2 | 36 | 1 | 3 | 2.28 | 0.66
A2 | 36 | 1 | 4 | 2.25 | 0.73
A3 | 36 | 1 | 4 | 2.22 | 0.90
PU1 | 36 | 1 | 4 | 2.19 | 0.75
SI2 | 36 | 1 | 5 | 2.17 | 1.00
A1 | 36 | 1 | 3 | 2.08 | 0.81
E1 | 36 | 1 | 4 | 1.72 | 0.74
E2 | 36 | 1 | 3 | 1.64 | 0.68
SI1 | 36 | 1 | 3 | 1.64 | 0.72
BI3 | 36 | 1 | 3 | 1.58 | 0.73
BI2 | 36 | 1 | 3 | 1.56 | 0.61
BI1 | 36 | 1 | 3 | 1.69 | 0.67
UNT5 | 36 | 1 | 4 | 1.86 | 0.64
Table 8. Cluster 4.
Descriptive Statistics of Cluster 4
Item | N | Minimum | Maximum | Mean | Std. Deviation
PU3 | 82 | 3 | 5 | 4.82 | 0.42
PU2 | 82 | 3 | 5 | 4.77 | 0.45
A1 | 82 | 3 | 5 | 4.74 | 0.52
E1 | 82 | 3 | 5 | 4.74 | 0.49
A3 | 82 | 3 | 5 | 4.73 | 0.50
BI1 | 82 | 3 | 5 | 4.73 | 0.47
A2 | 82 | 2 | 5 | 4.72 | 0.57
PU1 | 82 | 3 | 5 | 4.71 | 0.48
E2 | 82 | 2 | 5 | 4.70 | 0.60
BI3 | 82 | 1 | 5 | 4.68 | 0.65
BI2 | 82 | 2 | 5 | 4.65 | 0.62
PEoU2 | 82 | 1 | 5 | 4.57 | 0.74
PEoU1 | 82 | 1 | 5 | 4.46 | 0.82
UNT4 | 82 | 2 | 5 | 4.39 | 0.75
UNT1 | 82 | 1 | 5 | 4.26 | 0.95
UNT6 | 82 | 2 | 5 | 4.18 | 0.85
UNT3 | 82 | 1 | 5 | 3.96 | 1.14
SI2 | 82 | 1 | 5 | 3.90 | 0.92
UNT2 | 82 | 1 | 5 | 3.50 | 1.15
UNT5 | 82 | 1 | 5 | 3.29 | 1.18
SI1 | 82 | 1 | 5 | 3.29 | 1.36
Table 9. Summary of the clusters.
Clusters | Number of Respondents | Technological Openness | AI Acceptance | Willingness to Purchase | Social Influence | Description
1 | 72 | Moderate | Slightly Positive | Low | Neutral | Moderately interested, but not active users
2 | 112 | High | High | Medium | Medium | Open to AI, likely users
3 | 36 | Low | Very Low | Minimal | Low | AI sceptics, resistant users
4 | 82 | Very High | Outstanding | High | High | Innovators, AI enthusiasts, active users
Table 10. Measurement model: Constructs, Items, and Outer Loadings.
Construct | Item | Outer Loading
Perceived Usefulness | PU1 | 0.945
 | PU2 | 0.945
 | PU3 | 0.938
Perceived Ease of Use | PEOU1 | 0.906
 | PEOU2 | 0.931
Attitude | A1 | 0.933
 | A2 | 0.933
 | A3 | 0.921
Social Influence | SI1 | 0.827
 | SI2 | 0.901
Enjoyment | E1 | 0.966
 | E2 | 0.963
Behavioural Intention | BI1 | 0.917
 | BI2 | 0.936
 | BI3 | 0.924
Table 11. Reliability and validity indicators to evaluate the measurement model.
Construct | Cronbach's Alpha | rho_a | rho_c | AVE
A | 0.921 | 0.921 | 0.950 | 0.864
BI | 0.917 | 0.917 | 0.947 | 0.857
E | 0.925 | 0.927 | 0.964 | 0.931
PEOU | 0.815 | 0.828 | 0.915 | 0.843
PU | 0.937 | 0.937 | 0.960 | 0.889
SI | 0.668 | 0.699 | 0.856 | 0.748
Table 12. Multicollinearity Check.
Construct | VIF
PU → BI | 1.89
PEOU → BI | 1.65
A → BI | 2.05
SI → BI | 2.12
E → BI | 1.78
Table 13. Correlation of the dimensions investigated.
 | A | BI | E | PEOU | PU | SI
A | 1.000 | 0.752 | 0.738 | 0.450 | 0.778 | 0.552
BI | 0.752 | 1.000 | 0.877 | 0.490 | 0.744 | 0.615
E | 0.738 | 0.877 | 1.000 | 0.497 | 0.703 | 0.586
PEOU | 0.450 | 0.490 | 0.497 | 1.000 | 0.507 | 0.246
PU | 0.778 | 0.744 | 0.703 | 0.507 | 1.000 | 0.520
SI | 0.552 | 0.615 | 0.586 | 0.246 | 0.520 | 1.000
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
