Article

Factors Influencing Generative AI Usage Intention in China: Extending the Acceptance–Avoidance Framework with Perceived AI Literacy

1 School of Public Policy and Administration, Xi'an Jiaotong University, Xi'an 710049, China
2 School of Management, Xi'an Jiaotong University, Xi'an 710049, China
* Author to whom correspondence should be addressed.
Systems 2025, 13(8), 639; https://doi.org/10.3390/systems13080639
Submission received: 18 June 2025 / Revised: 18 July 2025 / Accepted: 22 July 2025 / Published: 1 August 2025
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

Abstract

In the digital era, understanding the intention to use generative AI is critical, as the technology enhances productivity, transforms workflows, and enables humans to focus on higher-value tasks. Drawing upon the unified theory of acceptance and use of technology (UTAUT) and the technology threat avoidance theory (TTAT), this research integrates perceived AI literacy into the AI acceptance–avoidance framework as a central variable. The study gathered 583 valid survey responses from China and validated its model using a two-stage approach that combines structural equation modeling with artificial neural networks. The findings indicate that the model explains 51.6% of the variance in generative AI usage intention. Except for social influence, all variables within the extended framework significantly affect usage intention, with perceived AI literacy being the strongest predictor (β = 0.33, p < 0.001). Additionally, perceived AI literacy mitigates the adverse effect of perceived threats on the intention to use AI. For practice, the findings suggest that enterprises adopt a tiered strategy: maximize perceived benefits by integrating AI skills into reward systems and providing task-automation training; minimize perceived costs through dedicated technical support and transparent risk mitigation plans; and cultivate AI literacy via progressive learning paths, advancing from data analysis to innovation.

1. Introduction

The digital transformation of industry is driving an evolution from automation-centric processes to more collaborative, human-centered approaches. While traditional industrial frameworks prioritized operational efficiency through technology-driven optimization, modern approaches increasingly emphasize the reintegration of human creativity into advanced technological systems [1,2]. This shift aims not merely to improve working efficiency, but, more importantly, to liberate employees from repetitive tasks through meaningful human–AI collaboration [3]. Generative AI has emerged as a pivotal driver of this transformation. Unlike traditional AI tools confined to narrow application scenarios, generative AI systems possess the capabilities for cross-scenario operation, interpretation of complex unstructured information, and support for natural human–computer interaction through multimodal interfaces [4]. These capabilities allow AI systems to augment human labor by automating routine tasks, thereby freeing cognitive resources for creative and higher-order thinking [1,5].
However, no matter how advanced the technology, its actual impact ultimately depends on human acceptance and meaningful utilization. This is particularly crucial in the context of Industry 5.0, which—like generative AI—is inherently human-centric. Specifically, Industry 5.0 relies on generative AI to automate intricate processes in smart manufacturing, healthcare, and logistics, while the effectiveness of generative AI in these scenarios directly hinges on human trust and proactive collaboration [2,5]. A vivid example is Siemens' industrial copilots: these generative AI-based systems can respond to verbal instructions, dynamically optimize processes, and adapt to user preferences, intuitively demonstrating the potential of this technology to reshape workplace collaboration models.
Beyond industrial use cases, generative AI applications such as ChatGPT and DeepSeek have allowed users to efficiently generate coherent text, images, and code with minimal input, drastically reducing the time and effort required for various cognitive tasks [6,7]. In China, generative AI has exhibited substantial advancements. According to relevant data, the number of enterprises involved in generative AI in China was expected to exceed 4500 by 2024, with the core industry scale nearing CNY 600 billion [8]. An extensive industrial chain, encompassing fundamental research, technological development, and industrial application, has been established, thereby facilitating the widespread application and innovative advancement of generative AI across various sectors in China. These developments collectively highlight the increasingly crucial role of generative AI in shaping the future of work. However, the extent of technological implementation will ultimately depend on users' actual acceptance and willingness to adopt it. Despite the rapid enhancement of generative AI's technical capabilities, user engagement remains inconsistent. For instance, research indicates that while some users enthusiastically embrace these tools, others exhibit skepticism or resistance due to concerns about accuracy, privacy, job displacement, or ethical issues [9,10].
Research on the antecedents of generative AI usage intention has mainly drawn on UTAUT. Although widely applied, UTAUT has yielded inconsistent results for generative AI [11,12], and it primarily focuses on positive motivators for technology use, often neglecting users' avoidance behaviors, particularly those triggered by emerging, unfamiliar, or potentially disruptive technologies such as generative AI [13]. For example, while researchers have verified the significant influence of performance expectancy and facilitating conditions (both key UTAUT factors) on mobile banking adoption [12], recent research found that these two constructs exerted a non-significant influence on generative AI adoption [11]. Moreover, existing UTAUT studies are largely concentrated in the educational sector, leaving manufacturing, services, and other industries underexplored—sectors where human–AI collaboration and automation threat perceptions may significantly alter adoption dynamics [12,14]. To address this gap, TTAT provides a complementary perspective: it posits that individuals evaluate technologies based on perceived threats, such as security vulnerabilities or loss of control, and that such perceptions shape avoidance motivations [15]. In the case of generative AI, users may be concerned about misinformation, AI-generated errors, or over-reliance, all of which can affect their decision to adopt the technology [9,16]. Therefore, combining UTAUT and TTAT enables a more holistic understanding of both approach and avoidance motivations in generative AI adoption.
However, the translation of user motivations into actual behavior hinges critically on users' cognitive capacity to interact with AI systems, a dimension unaccounted for in either the UTAUT or TTAT framework [17,18]. As generative AI becomes increasingly integrated into everyday work and life, the ability to understand, evaluate, and interact with AI technologies becomes crucial, not only for the adoption of AI but also for its responsible use [19]. This introduces the concept of perceived AI literacy (PAIL), defined as the ability to self-assess and meaningfully understand and interact with AI systems [20,21]. Perceived AI literacy includes awareness of the capabilities and limitations of AI, understanding of its ethical implications, and confidence in making informed decisions about using AI [22]. A WEF report [23] indicates that 74.9% of business organizations plan to adopt AI technology by 2027, yet relatively few employees have received relevant skills training. This imbalance between technology penetration and capability readiness is widespread across countries. In this context, people with higher perceived AI literacy tend to better recognize the benefits of AI while managing the associated risks, thus more effectively balancing acceptance and avoidance responses [24]. The contradiction between the speed of technological development and the level of public awareness makes perceived AI literacy a potentially key motivational and moderating factor influencing willingness to use generative AI [3]. Understanding this cognitive dimension can inform educational strategies, platform designs, and policy interventions aimed at achieving inclusive AI adoption.
Motivated by the theoretical and empirical gaps in China's generative AI adoption landscape, this study constructs an integrated framework synthesizing the acceptance–avoidance paradigm with UTAUT, TTAT, and perceived AI literacy. Specifically, we explore how performance expectancy, effort expectancy, social influence, and facilitating conditions (from UTAUT); perceived threats and perceived avoidability (from TTAT); and perceived AI literacy (PAIL) interact to influence behavioral willingness to use generative AI across various industries in China. This multi-dimensional theoretical model addresses existing research gaps by integrating cognitive factors into the framework of human–machine collaboration, considering approach and avoidance motives as well as cognitive abilities. It elucidates the adoption of generative AI from multiple perspectives, acknowledging that willingness is influenced by both rational evaluation and emotional response, and is further moderated by an individual's perceived cognitive capacity. By clarifying the mechanisms through which various factors shape willingness to engage with generative AI, this study enhances the theoretical understanding of generative AI adoption and contributes to the broader discourse on technology acceptance and AI applications.
The rest of the study is organized as follows: Section 2 provides an in-depth literature review and hypothesis development. Section 3 provides a detailed account of the research methods, and Section 4 presents the results of partial least squares (PLS) tests and artificial neural network analyses. Section 5 explores the theoretical and administrative implications of the findings. Finally, Section 6 points out the limitations of the study and presents directions for future research.

2. Theoretical Foundations and Hypothesis Development

2.1. UTAUT, TTAT, Perceived AI Literacy, and Generative AI Usage Intention

The swift advancement of generative AI, exemplified by large language models, image generators, and autonomous coding agents, has significantly transformed interactions between individuals, organizations, and intelligent systems, thereby enhancing innovation and economic performance in firms [25,26,27]. This shift is particularly pronounced in China, where the integration of generative AI is deeply rooted in a cultural emphasis on technological progress and a supportive legal framework. At the cultural level, China has long regarded technological innovation as the cornerstone of national development, fostering social acceptance of AI tools through extensive digitalization and government-led "smart upgrade" campaigns across industries. As these technologies increasingly permeate areas such as education, healthcare, information technology, and manufacturing, understanding the factors that drive users to adopt generative AI has become a key research focus. Research on generative AI usage intention primarily follows the technology acceptance model [28], which identifies perceived usefulness and ease of use as key determinants of individuals' and organizations' technology adoption behavior. This model has served as a foundational framework for subsequent studies assessing user attitudes and intentions toward emerging digital technologies.
However, the unique opportunities and risks associated with generative AI necessitate an expanded analytical lens. To address this, researchers have developed integrated technology acceptance frameworks that extend beyond traditional models, including the UTAUT. As a meta-theory synthesizing eight prior models (e.g., TAM, TPB), UTAUT establishes performance expectancy, effort expectancy, social influence, and facilitating conditions as core predictors of IT adoption. Its application to generative AI tools reveals critical influences on user acceptance: students' adoption intention is significantly shaped by perceived effort requirements (effort expectancy), output reliability concerns (performance expectancy), peer/organizational pressures (social influence), and technical/institutional support structures (facilitating conditions) [29].
Nevertheless, UTAUT predominantly emphasizes technology’s positive utility while inadequately addressing risk perceptions and avoidance tendencies toward emerging technologies. While generative AI offers notable advantages, users often experience psychological barriers stemming from perceived threats. For example, Pramod et al. [30] reported that concerns over misinformation generation and deepfake misuse significantly suppressed usage intentions of generative AI among social media users. Similarly, generative AI adoption in creative industries has been investigated [31], and the findings particularly emphasized security concerns in construction supply chains, where AI adoption is hindered by cybersecurity threats, cost–risk trade-offs, algorithmic opacity, and challenges in establishing trust and quantifiable benefits.
TTAT, initially proposed by Liang and Xue [32], provides a threat appraisal framework that explains how individuals assess and respond to perceived IT-related threats. The theory outlines four key factors—perceived threat susceptibility, perceived threat severity, safeguard effectiveness, and self-efficacy—that jointly influence an individual's motivation to avoid or reject a technology. Accordingly, some studies have applied TTAT to examine privacy concerns associated with AI-generated content, revealing that individuals with high perceived threat severity were less likely to adopt AI applications. Within the realm of generative AI, TTAT has also been adopted to investigate users' defensive reactions toward tools like ChatGPT, especially when potential harms are perceived to outweigh functional benefits.
Whether examining UTAUT's motivational drivers or TTAT's threat mitigation mechanisms, their operational efficacy remains fundamentally contingent on users' cognitive engagement with AI technology. Literacy—originally denoting reading/writing proficiency—now encompasses domain-specific skill acquisition coupled with contextually adaptive abilities to understand, interpret, apply, and communicate such competencies [33]. Given artificial intelligence's nature as a new technological science simulating and extending human intelligence [34], AI literacy emerges as critical for evaluating human–AI interaction capabilities. Specifically, it entails the capacity to ethically identify, utilize, and assess AI products [34], while enabling intuitive operational proficiency and effective tool management [35]. This construct formalizes core competencies, allowing users to critically evaluate AI systems, efficiently collaborate with algorithmic assistants, and confidently deploy solutions across diverse settings [20]. Empirical evidence confirms its impact: AI literacy accelerates educators' technological–pedagogical knowledge, directly increasing generative AI adoption in higher education [18], while substantially strengthening students' attitude–behavioral linkages [17].
Beyond productivity enhancements, generative AI simultaneously introduces ethical dilemmas spanning data privacy violations, algorithmic biases, and deepfake dissemination. Addressing these necessitates systematic literacy development, integrating three pillars—technical knowledge acquisition, applied skill refinement, and ethical critical thinking. Crucially, objective AI literacy diverges from perceived AI literacy—the latter more directly determines psychological acceptance and behavioral outcomes. Users with elevated perceived literacy demonstrate heightened ethical responsibility awareness and proactive risk mitigation, fostering sustainable AI implementation [36]. This perception significantly mediates relationships between perceived usefulness/ease of use and adoption intentions [22], underscoring the necessity of integrating AI literacy into user training, system design, and policy efforts to ensure equitable and confident usage of generative AI technologies.
Overall, existing research exhibits three critical gaps in explaining willingness to use generative AI, necessitating an integrated theoretical framework. First, most empirical applications of UTAUT and TTAT are concentrated in the educational domain, particularly among students and educators. Research on generative AI adoption is limited in sectors such as manufacturing and customer-facing services, where contextual variables (e.g., automation threat or human–AI collaboration models) may produce different adoption dynamics, and existing findings on the effects of UTAUT and TTAT constructs on technology usage intention remain inconsistent. Second, while AI literacy is acknowledged as pivotal, existing research remains limited and fragmented, with few studies having integrated it into TTAT models to further explore its mechanisms on generative AI usage intention. This gap persists despite empirical evidence from Cho and Jeong [36], whose findings demonstrate that AI literacy can critically moderate the effects of privacy concerns and perceived usefulness on ChatGPT usage. Third, there is a conspicuous absence of an integrated framework that combines AI literacy with opportunity-enhancing and threat-avoiding factors and assesses the relative importance of competing drivers and inhibitors in shaping users' generative AI usage intention. Therefore, building on UTAUT, TTAT, and perceived AI literacy, this study endeavors to provide a comprehensive framework that elucidates the determinants of generative AI usage intention, and their relative importance, within the education, manufacturing, information technology (IT), and service sectors in China.

2.2. UTAUT and Generative AI Usage Intention

UTAUT posits that individuals' and organizations' behavioral intentions are determined by four core factors: performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC) [29,37].
Performance expectancy refers to the extent to which individuals perceive that a technology will improve their productivity, efficiency, and overall outcomes [37]. Firstly, generative AI elevates learning outcomes by offering customized learning opportunities, generating study materials, and offering instant feedback [14]. For example, studies demonstrate that performance expectancy critically shapes students’ behavioral intentions toward using generative AI in educational contexts [38], as they perceive that these tools can improve their educational performance. Secondly, generative AI is capable of managing repetitive tasks, thereby enabling people to concentrate on intricate and creative work [14,39]. This perceived improvement in efficiency can drive users to adopt generative AI [40], and previous research also confirmed that performance expectancy critically influences individuals’ willingness to engage with generative AI [39]. Thirdly, generative AI can inspire new ideas and solutions by generating novel content and suggestions, enhancing innovation and creativity, which users might find beneficial and inspiring, particularly for new product development [13,41]. For example, Xia and Chen [13] pointed out that performance expectancy can significantly promote users’ generative AI usage intention in new product development, as generative AI not only improves innovation efficiency but also introduces new technologies and knowledge, facilitating the generation of new ideas. Therefore, when users perceive a higher level of performance expectancy, they are more inclined to use generative AI. Consequently, we have the following:
H1. 
Performance expectancy will positively affect users’ generative AI usage intention.
Effort expectancy refers to the degree of ease associated with using a particular technology [37]; its importance for generative AI stems primarily from its role in reflecting users' perceptions of the technology's user-friendliness and accessibility [42]. Firstly, users are more inclined to adopt technologies that are easy to comprehend and require minimal effort to learn [43]. User-friendly designs reduce learning demands, increasing adoption intention when users perceive low operational effort; thus, technology acceptance intensifies with system usability [44]. Additionally, the availability of generative AI tools across multiple devices (e.g., mobile phones, laptops, tablets) can enhance users' intention to engage with the technology by reducing effort expectancy [45]. This accessibility enables users to interact with the technology more conveniently, further augmenting their intention to use it. Consequently, we propose the following:
H2. 
Effort expectancy will positively affect users’ generative AI usage intention.
Social influence is the degree to which a person feels that significant others, such as colleagues, friends, or supervisors, think they should adopt a specific technology [37], and it is a key determinant of users' intentions to utilize a given technology. Firstly, social influence can exert peer pressure and provide encouragement within a network, thereby increasing the likelihood that users will adopt generative AI if they perceive that their peers and colleagues are utilizing it [45,46]. For example, behavioral intention shows significant dependence on social influences from academic and peer sources [47]. Moreover, social influence can also emanate from organizations, shaped by the attitudes and cultures of top management towards AI technologies, such as organizational AI training programs and positive reinforcement from supervisors [48]. When users perceive organizational support for generative AI, they are more inclined to adopt it to gain legitimacy within the organization [49]. For example, Yakubu et al. [14] demonstrated that social influence through peer pressure and environmental support can significantly enhance people's learning and usage of content-generative AI tools. Nevertheless, the efficacy of social influence is subject to contextual contingencies, as its impact may be moderated by individual differences or situational factors [50,51]. Hence, we propose the following:
H3. 
Social influence will positively affect users’ generative AI usage intention.
Facilitating conditions reflect the perceived availability of essential resources and support for utilizing a specific technology [37], and they matter because they capture user-perceived access to technical infrastructure, organizational training, and resource support. Firstly, organizational support in terms of technical infrastructure—such as the provision of high-speed internet, well-integrated AI platforms, and access to digital resources—has a major effect on users' willingness to engage with generative AI [48]. For instance, Jo and Bang [48] identified network quality, accessibility, and system responsiveness as critical predictors of ChatGPT adoption, usage intention, and user satisfaction. Furthermore, beyond infrastructure support, organizational AI-related training and education are vital for enhancing user satisfaction. Access to AI-related training programs and educational resources can bolster users' confidence and competence within organizations, thereby mitigating the risk of AI-related issues [16,52]. Consequently, providing organizational technical infrastructure support along with AI-related training and education can enhance users' willingness to adopt AI tools. Thus, we propose the following:
H4. 
Facilitating conditions will positively affect users’ generative AI usage intention.

2.3. TTAT and Generative AI Usage Intention

Malicious IT programs—including viruses, spam, spyware, and adware—have demonstrably compromised personal and corporate IT infrastructure, resulting in significant economic losses [53]. Developed through an integration of literature from healthcare, cybernetics, psychology, and information systems, TTAT elucidates the motivations and mechanisms underlying users’ avoidance of malicious IT threats [32,54]. The theory claims that people first evaluate the presence and magnitude of an IT threat before determining appropriate mitigation strategies [55]. Crucially, TTAT conceptualizes threat avoidance as a cybernetic process where individuals endeavor to expand the disparity between their current secure state and a potential insecure state [32,56]. The actions undertaken by users are contingent upon their assessment of the threat’s avoidability. When threats are perceived as avoidable, users typically engage in problem-focused protective measures; conversely, when threats are deemed unavoidable, users often resort to emotion-focused coping strategies [15]. Consequently, users may refrain from utilizing specific information technology systems, such as generative artificial intelligence, if they perceive substantial unmitigable risks, yet they may persist in their usage if they believe that threats can be effectively managed.
Central to the predictive efficacy of TTAT are two pivotal constructs: perceived threats and perceived avoidability. Perceived threats initiate the avoidance process by indicating potential harm, while perceived avoidability determines the feasibility and nature of the response [15,32]. Collectively, these constructs constitute the primary cognitive appraisal mechanisms through which TTAT explicates behavioral outcomes, making them indispensable for investigating the adoption of generative artificial intelligence. Their selection is also theoretically substantiated, as they embody the fundamental dimensions of threat evaluation (perceived threat) and coping appraisal (perceived avoidability) that jointly drive avoidance motivation and subsequent actions within the TTAT framework.
Perceived threats describe how a person assesses the danger or harm connected with a particular information technology event or system [15]. Information technology security threats are pervasive and indiscriminate, possessing the capacity to impact anyone, at any time, and in any place. Such threats often provoke negative emotional responses, including fear and anxiety, due to the potential for substantial losses. In the context of generative artificial intelligence, users who perceive threats express concerns about potential risks such as data breaches, misinformation generation, or data misuse, all of which could disrupt tasks or cause personal or professional harm. These perceived threats can negatively influence the willingness to adopt generative AI technologies. Supporting this notion, research demonstrates that perceived risks adversely affect attitudes toward AI-driven smart healthcare among non-clinicians [57] and shape stock investors’ intentions to use robo-advisors [58]. This evidence suggests that the decision to refrain from using such technologies is frequently regarded as the most effective strategy for risk mitigation. Therefore, we have the following:
H5. 
Perceived threats will negatively affect users’ generative AI usage intention.
Perceived avoidability reflects one's assessment of the probability of successfully circumventing IT security threats related to a certain technology. This assessment is informed by the perceived effectiveness, feasibility, and cost-efficiency of available protective measures [32,59]. According to TTAT, perceived controllability influences coping strategies: users tend to adopt problem-focused coping when they perceive a high degree of control, whereas emotion-focused coping is more common when perceived control is limited [59]. Perceived avoidability therefore encapsulates users' perceived control over security threats related to generative AI. A heightened sense of control enhances feelings of safety and emotional stability during usage, whereas diminished perceived control may lead to insecurity and emotional disturbance. Consequently, when users perceive a high level of avoidability, they are inclined to believe that effective and feasible protective measures are available, thereby reinforcing their sense of security. This perception of security and control over potential threats can positively impact their willingness to engage with the technology. Thus, the following hypothesis is proposed:
H6. 
Perceived avoidability will positively affect users’ generative AI usage intention.

2.4. Perceived AI Literacy and Generative AI Usage Intention

Perceived AI literacy refers to the level of understanding and competence individuals have in using and interacting with AI technologies [36]. Higher AI literacy can therefore boost users' confidence, making them more willing to adopt and integrate AI tools into their activities [18]. For example, some studies find that AI literacy can promote users' perceptions of and intentions to use AI tools in the education industry [22]. Moreover, Al-Abdullatif [18] points out that AI literacy can also promote users' perceived trust, which in turn increases the usage of generative AI. In addition, increased literacy helps users comprehend the strengths and weaknesses of generative AI, resulting in more informed and positive perceptions of its potential benefits. According to Almatrafi et al. [60], AI literacy significantly enhances users' beliefs concerning the potential advantages of AI tools. Similarly, Bozkurt and Sharma [61] highlighted that AI literacy can reduce the perceived complexity and effort required to use AI tools. That is, AI literacy empowers users to understand the impact of minor language variations on generative AI responses, leading to enhanced user–AI communication and interaction, which ultimately shapes their intentions to use this technology. Therefore, we propose the following:
H7. 
Perceived AI literacy will positively affect users’ generative AI usage intention.
Beyond its direct impact, perceived AI literacy plays a crucial role in moderating the relationship between perceived threat and users' generative AI usage intention. Perceived threat refers to users' subjective evaluation of the systemic risks associated with generative AI applications, encompassing multidimensional concerns: privacy and security vulnerabilities, such as data leakage and algorithmic bias; ethical challenges, including deepfakes and ambiguities in responsibility; and socioeconomic disruptions, such as employment restructuring and skill displacement [10]. Functioning as a multidimensional cognitive capability, perceived AI literacy encompasses technical understanding, ethical discernment, and risk mitigation proficiency. Enhanced literacy reduces threat perception through two mechanisms: cognitive restructuring allows users to comprehend AI's foundational logic—such as machine learning workflows and data processing patterns—thereby transforming abstract risks into manageable challenges, while behavioral empowerment enables proactive countermeasures, including data anonymization techniques, smart contract audits, and critical output evaluation using established ethical frameworks [56,57]. In essence, users with advanced AI literacy can not only identify potential vulnerabilities but also develop a comprehensive protection system.
This competency fundamentally reshapes threat perception. When users possess demonstrated capacity to neutralize risks, perceived threat evolves from an overwhelming systemic hazard into a manageable technical challenge. Consequently, perceived AI literacy moderates threat effects by enhancing users’ risk control efficacy, ultimately attenuating its inhibitory influence on usage intention. Hence, the following research hypothesis is proposed:
H7a. 
Perceived AI literacy will positively moderate the relationship between perceived threat and users’ generative AI usage intentions.
Similarly, perceived AI literacy is expected to strengthen the relationship between perceived avoidability and users' generative AI usage intention. Perceived avoidability here also captures users' subjective assessment of technology avoidance costs, integrating direct costs (e.g., time and financial inputs) with indirect costs (e.g., forgone benefits and social disconnection), and thus reflects the risk trade-off mechanism in individuals' technology decision-making [15,62].
AI literacy serves a dual moderating function in this context. According to the cognitive value theory of technology adoption [63], enhanced AI literacy significantly elevates users’ cognitive understanding of the transformative potential of technology. This understanding extends beyond recognizing explicit benefits such as increased efficiency (e.g., automated content creation, accelerated data processing) and innovation empowerment (e.g., fostering creativity, integrating cross-domain knowledge) associated with generative AI. It also encompasses an appreciation of implicit values, such as technology-driven social advancement and industrial upgrading. When users develop a comprehensive understanding of AI’s multi-dimensional value, avoiding its use is no longer perceived as a risk-reduction strategy, but rather as an opportunity cost resulting from the deliberate forfeiture of potential value [63].
On the other hand, according to the risk perception and management theory [63], AI literacy equips users with a structured framework for risk response. Individuals possessing advanced literacy levels are adept at employing security protocols to convert perceived avoidability into a psychological safety boundary conducive to technology adoption. This transformative mechanism facilitates the evolution of users’ understanding of “avoidability” from a passive stance of “I can avoid harm” to an active approach of “I can safely harness technical value”. Through the combined effects of “value cognition enhancement” and “risk control empowerment”, AI literacy substantially amplifies the positive influence of perceived avoidability on the intention to use technology. Consequently, the following research hypothesis is proposed:
H7b. 
Perceived AI literacy will positively moderate the relationship between perceived avoidability and users’ generative AI usage intention.
As depicted in Figure 1, the research model is developed from the hypotheses above, together with the control variables.

3. Methodology

3.1. Instrument Development

Eight constructs were measured on seven-point Likert scales. Existing scales were used when available; otherwise, items were derived from the most similar scales. Measures of performance expectancy, effort expectancy, social influence, and facilitating conditions were drawn from Venkatesh et al. [64] and Cao et al. [65]. Measures of perceived threat were adopted from Baabdullah et al. [66] and Liang et al. [15], and items for perceived avoidability were drawn from Liang et al. [56]. Perceived AI literacy was measured with the Perceived Artificial Intelligence Literacy Questionnaire (PAILQ-6) [67], and generative AI usage intention was measured with items from Venkatesh et al. [64] and Meng et al. [68]. Following previous studies on AI usage intention, gender, age, educational level, industry, and work experience were chosen as control variables [65,66,69]. Specific items for the scales are provided in Table A1.
Following the procedures of the back-translation method [70], the English scales were translated into Chinese for the respondents. Initially, the authors translated the original items into Chinese, making minor wording adjustments to ensure alignment with the research context. Subsequently, a group of bilingual doctoral candidates independently performed back-translation to identify and reconcile conceptual discrepancies.
To establish content validity, a structured expert review was performed. Three domain experts (two academic researchers and one industry practitioner) independently evaluated each item’s theoretical relevance and contextual appropriateness. Their assessments underwent iterative consensus discussions until unanimous agreement was reached on a finalized item set comprehensively covering all target constructs.
A pilot study was then conducted at a Chinese technology firm with a sample of 30 employees. This phase employed cognitive interviews and iterative revisions to evaluate item clarity, logical flow, and administrative feasibility. Ambiguous expressions were refined through participant feedback cycles, and problematic items were eliminated to produce the psychometrically optimized Chinese questionnaire.
In this study, the item PAIL2 ("I believe I can contribute to Generative AI projects") was dropped from the perceived AI literacy (PAIL) scale due to both theoretical and empirical concerns. Firstly, conceptually, perceived AI literacy refers to individuals' ability to understand, evaluate, and responsibly engage with AI technologies [21], rather than their capacity to contribute to AI projects. PAIL2 introduces a notion of self-assessed competence in contributing to AI development, which aligns more closely with self-efficacy or professional capability than with perceived literacy [20]. Including this item risks conflating cognitive understanding with project-based competence, thereby weakening the construct validity of the scale. Secondly, PAIL2 lacks dimensional alignment with the remaining items, which emphasize knowledge, critical judgment, communication, and creative application—dimensions more consistent with established definitions of perceived AI literacy [24]. Contributing to generative AI projects often requires advanced technical skills and contextual resources that go beyond literacy and may introduce irrelevant variance [71]. Thirdly, the measurement reliability and validity results showed that the factor loading of PAIL2 was below 0.50, indicating a weak correlation with the underlying latent construct and suggesting that the item does not contribute meaningfully to the measurement of perceived AI literacy. PAIL2 was therefore dropped from the analysis.

3.2. Data Collection

This study employed purposive convenience sampling with sector stratification in a major western Chinese provincial capital—a national hub for education, technology, and manufacturing—to optimize accessibility while ensuring contextual representativeness. Industry stratification targeted five generative AI-intensive domains (IT, finance, manufacturing, education, services), identified through regional economic reports [72]. Companies were selected via academic–industry networks based on active AI implementation, with employee eligibility strictly limited to full-time staff (>6 months’ tenure), systematically excluding contractors and interns.
A total of 1000 questionnaires were administered through industry partnerships: senior managers and HR departments facilitated on-site distribution with Survey One (dependent variables/moderators/demographics), followed by Survey Two (independent variables) after a two-week interval. For employees who completed Wave 1 but were absent during Wave 2 on-site sessions, HR managers proactively arranged physical questionnaire completion upon their return to company premises. Completed paper surveys were then returned to the research team via express mail. Cross-wave matching utilized unique participant IDs (last 4 mobile digits) recorded in the top-right corner of both questionnaires, yielding 613 returns. After excluding submissions with missing data or response inconsistencies, 583 valid responses were retained (58.3% effective rate).
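To make the matching and screening procedure concrete, the sketch below shows one way such two-wave data could be merged and cleaned in Python; the file and column names (e.g., participant_id) are illustrative assumptions rather than the study's actual materials.

```python
import pandas as pd

# Hypothetical sketch of the two-wave matching step described above.
wave1 = pd.read_csv("survey_one.csv")   # dependent variables, moderators, demographics
wave2 = pd.read_csv("survey_two.csv")   # independent variables, two weeks later

# Match respondents across waves on the unique identifier
# (the last four digits of the mobile number recorded on both forms).
merged = wave1.merge(wave2, on="participant_id", how="inner")

# Screen out submissions with missing data, mirroring the process that
# reduced 613 returns to 583 valid responses.
valid = merged.dropna()
print(f"{len(valid)} matched and complete responses retained")
```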

3.3. Data Analysis

Measurement adequacy and structural validity were examined via partial least squares (PLS), with SmartPLS 4.1.0 serving as the primary analytical tool. Given that PLS is primarily aimed at optimizing the prediction of target variables [73], it is well suited to testing exploratory models and analyzing complex interactions [74], aligning with the objectives of our study. Moreover, SmartPLS 4.1.0 was chosen for its advanced methodological capabilities, which are essential for validating complex models and handling intricate interaction terms while maintaining computational efficiency [75].

4. Results

4.1. Respondent Profile

As summarized in Table 1, out of the 583 respondents, 50.8% were male. Participant demographics reveal balanced gender representation and high engagement of prime working-age cohorts (90.8% aged 20–49). The sample captures extensive industry experience (65.1% with >5 years’ tenure) and elevated education levels (92.7% bachelor’s or higher). Critically, industry distribution—IT (28.8%), manufacturing (17.8%), finance (16.1%), education (22.0%), and services (15.3%)—directly aligns with sectors identified in China’s AI Development Plan as primary adopters of generative AI [72]. This intentional inclusion of key generative AI-adopting sectors, combined with the city’s dual economic structure, ensures the findings capture critical variations in generative AI usage intention.

4.2. Non-Response Bias and Common-Method Bias

To rigorously evaluate potential non-response bias (NRB), we employed a dual-validation approach. First, temporal analysis comparing early and late respondents showed no statistically significant differences in demographic attributes and core research variables (e.g., generative AI adoption intention), indicating minimal NRB risk [76]. Second, structured callback interviews with five respondents and five non-respondents revealed comparable levels of adoption-related perceptions, further evidencing the absence of systematic response reluctance tied to study constructs [77]. Collectively, NRB does not compromise the validity of our findings.
Three methods were employed to control for and assess the extent of common method bias (CMB). First, we implemented temporal separation by administering the independent and dependent variables in distinct survey phases conducted at different time points. To enable accurate cross-phase participant matching, unique identifiers (the last four digits of mobile numbers) were assigned during the initial phase and replicated in the second-phase questionnaire header. This temporal segregation served to disrupt respondents' cognitive associations between variable sets, thereby mitigating the risk of artificially inflated correlations attributable to common method variance. Second, Harman's single-factor test indicated that CMB is not a serious concern, with the largest factor accounting for 30.92% of the variance, below the recommended 40% threshold [78]. Third, as shown in Table A2, the unmeasured latent method construct (ULMC) test shows that the substantive variance of each indicator significantly exceeds the method variance [79]. Consequently, CMB poses minimal concern.
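As a point of reference, Harman's single-factor test can be approximated by extracting the first unrotated component from all scale items and inspecting its share of total variance; the sketch below uses a principal-component approximation and random placeholder data, not the study's responses.

```python
import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor_share(items: np.ndarray) -> float:
    """Share of total variance captured by the first unrotated factor,
    approximated here by the first principal component."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    return PCA(n_components=1).fit(z).explained_variance_ratio_[0]

# Placeholder data (583 respondents, 30 items); in the study, the largest
# factor explained 30.92% of the variance, below the 40% benchmark.
items = np.random.default_rng(0).normal(size=(583, 30))
print(f"{100 * harman_single_factor_share(items):.2f}% of total variance")
```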

4.3. Measurement Model Evaluation

The reliability and validity of measurements are evaluated following conventional approaches [79]. Construct reliability was tested using Cronbach’s alpha and composite reliability score. Table 2 demonstrates that all values for Cronbach’s alpha coefficient (α) and composite reliability (CR) surpass the 0.7 threshold, ensuring construct reliability.
Validity encompasses both convergent and discriminant aspects. Convergent validity was evaluated through item factor loadings and average variance extracted (AVE). Table 2 shows that each item's loading on its respective construct is above 0.7 and that all AVE values surpass the suggested threshold of 0.5, indicating adequate convergent validity. Discriminant validity was tested in three ways. First, Table 3 presents the Fornell–Larcker criterion results, which demonstrate that the square roots of the AVE (bolded diagonal values) are greater than the correlations among the constructs (the relevant column and row values), confirming discriminant validity [80]. Second, all correlations between constructs do not exceed 0.7, as shown in Table 3. Third, the discriminant validity condition for these reflective constructs is satisfied, as the Heterotrait–Monotrait (HTMT) ratio values are below 0.85 [81], as shown in Table 4.
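For readers replicating these criteria, the composite reliability and AVE statistics reported in Table 2 follow standard formulas over standardized outer loadings; the sketch below illustrates them with hypothetical loadings.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    error_var = 1 - loadings**2
    return loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of squared standardized loadings
    return float((loadings**2).mean())

loadings = np.array([0.78, 0.82, 0.85, 0.80])   # hypothetical item loadings
print(f"CR  = {composite_reliability(loadings):.3f}  (threshold: > 0.7)")
print(f"AVE = {average_variance_extracted(loadings):.3f}  (threshold: > 0.5)")
```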

4.4. Structural Model Analysis

We evaluated the structural model using a bootstrapping method with 10,000 resamples. Figure 2 displays the test results; the model explains 51.6% of the variance in the dependent variable, reflecting an acceptable level of explanatory power. Since the SRMR value is 0.054, below the 0.08 threshold [82], the model fit is satisfactory. Moreover, employing blindfolding with an omission distance of 7, the Stone–Geisser Q² value for generative AI usage intention was 0.337 (Q² > 0), confirming the model's predictive relevance and out-of-sample predictive power [83]. As expected, performance expectancy (H1: β = 0.245, p < 0.001), effort expectancy (H2: β = 0.242, p < 0.001), and facilitating conditions (H4: β = 0.106, p < 0.05) positively affect users' generative AI usage intention. However, the effect of social influence (H3: β = −0.085, p > 0.05) is not significant. Perceived threats (H5: β = −0.075, p < 0.05) are negatively associated with generative AI usage intention, and perceived avoidability (H6: β = 0.099, p < 0.05) positively affects usage intention. Perceived AI literacy (H7: β = 0.330, p < 0.001) also positively influences generative AI usage intention. Additionally, none of the control variables significantly affect the intention. Thus, H1–H2 and H4–H7 are supported, but H3 is not. The effect sizes of path coefficients within the research model were assessed using Cohen's f² metric [84]. In accordance with established benchmarks, large, medium, and small effects are recognized by thresholds of 0.35, 0.15, and 0.02, respectively. Analysis of specific constructs revealed that PE (f² = 0.067), EE (f² = 0.086), and PAIL (f² = 0.134) exhibit small-to-medium effects.
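For reference, Cohen's f² compares the model's explained variance with and without a focal predictor; in the sketch below, the excluded R² is back-computed from the reported values purely for illustration and is not a study output.

```python
def cohens_f2(r2_included: float, r2_excluded: float) -> float:
    # f^2 = (R2_included - R2_excluded) / (1 - R2_included)
    return (r2_included - r2_excluded) / (1 - r2_included)

# With the reported R² of 0.516, a predictor whose omission lowers R² to
# roughly 0.451 yields f² ≈ 0.134 (PAIL's reported effect size).
# Benchmarks: 0.02 small, 0.15 medium, 0.35 large.
print(round(cohens_f2(0.516, 0.451), 3))   # -> 0.134
```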

4.5. Moderation Test

To examine the moderation hypotheses, multiplicative terms were generated by cross-multiplying the items for perceived threats and perceived AI literacy, as well as for perceived avoidability and perceived AI literacy. All variables were standardized prior to calculation to address multicollinearity. The findings indicate that the path coefficient for H7a (β = 0.081, p < 0.05) is significant, but that for H7b (β = −0.028, p > 0.05) is not. Thus, H1–H2, H4–H7, and H7a are supported, while H3 and H7b are not. Following Cohen et al. [84], simple slopes are used to graphically represent the moderating effect of perceived AI literacy. As shown in Figure 3, two regression lines are plotted for the independent variable on the dependent variable, positioned one standard deviation above and below the mean of the moderating variable. When perceived AI literacy is low, perceived threat has a negative impact on the intention to use generative AI, and this effect becomes weaker when the level of perceived AI literacy is high, indicating that perceived threat exerts a stronger effect on generative AI usage intention for users with low perceived AI literacy. Following Chin et al. [85], the effect size (f²) was calculated to confirm the overall moderating effect. The resulting f² value is 0.023, confirming a significant yet small moderation effect in the proposed model [84].
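A minimal sketch of this standardized product-term and simple-slopes procedure is shown below, using synthetic data calibrated to the reported coefficients; it illustrates the technique and is not a reproduction of the study's SmartPLS analysis.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data mirroring the reported coefficients (illustrative only).
rng = np.random.default_rng(0)
n = 583
pt = rng.normal(size=n)                    # perceived threats
pail = rng.normal(size=n)                  # perceived AI literacy
intent = -0.075 * pt + 0.33 * pail + 0.081 * pt * pail + rng.normal(size=n)

# Standardize, form the product term, and estimate the moderated model.
z = lambda x: (x - x.mean()) / x.std(ddof=1)
pt, pail, intent = z(pt), z(pail), z(intent)
X = sm.add_constant(np.column_stack([pt, pail, pt * pail]))
fit = sm.OLS(intent, X).fit()

# Simple slopes of perceived threat at +/- 1 SD of the moderator.
b_pt, b_inter = fit.params[1], fit.params[3]
for label, m in (("low PAIL (-1 SD)", -1.0), ("high PAIL (+1 SD)", 1.0)):
    print(f"{label}: slope = {b_pt + b_inter * m:.3f}")
```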

4.6. ANN Analysis

In the second stage of the empirical analysis, we employed the artificial neural network (ANN) methodology, a computational model that emulates the structure of biological neural networks [86]. Figure 4 shows the ANN model, a multi-layer perceptron with a sigmoid activation function. It comprises an input layer with six significant predictors (performance expectancy, effort expectancy, facilitating conditions, perceived threats, perceived avoidability, and perceived AI literacy) identified from the PLS-SEM model; a single hidden layer with four neurons; and an output layer representing the behavioral intention to use generative AI. The model was trained using the backpropagation algorithm [87], and its performance was evaluated with a 10-fold cross-validation strategy to mitigate overfitting; in each fold, the data were randomly split into 90% for training and 10% for testing. As shown in Table 5, the average root mean square error (RMSE) values during the training and testing phases were 0.067 and 0.059, respectively, indicating that the ANN model achieved satisfactory predictive accuracy.
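Under the assumption of a scikit-learn implementation and placeholder data in place of the survey scores, the described architecture and validation scheme could be sketched as follows.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# One 4-neuron hidden layer with sigmoid ("logistic") activation,
# evaluated via 10-fold cross-validation (a 90/10 split per fold).
rng = np.random.default_rng(42)
X = rng.random((583, 6))            # placeholder for the six predictors
y = rng.random(583)                 # placeholder for usage intention

rmses = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
    ann = MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",
                       solver="adam", max_iter=2000, random_state=1)
    ann.fit(X[train_idx], y[train_idx])
    pred = ann.predict(X[test_idx])
    rmses.append(mean_squared_error(y[test_idx], pred) ** 0.5)

print("mean test RMSE:", round(float(np.mean(rmses)), 3))
```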
To ensure the robustness of this approach, we compared candidate machine learning methods before selecting ANN. SVMs are best suited to high-dimensional classification, and decision trees are known for their interpretability on structured data; ANN, however, excels at capturing the complex, nonlinear relationships among variables that are common in behavioral research [86,88,89]. This makes ANN highly complementary to PLS-SEM, which primarily models linear effects. Moreover, unlike traditional regression techniques, ANNs can model non-compensatory decision-making processes [90] and do not require normality assumptions [91]. Given their ability to detect nonlinear relationships, multi-layer perceptrons (MLPs) have been recognized as an effective complement to PLS-SEM models [92] and are widely used in previous studies [86,88,89].
Table 6 presents the sensitivity analysis performed on the neural network, which evaluated how strongly each independent variable predicts the dependent variable. Notably, perceived AI literacy exhibited the highest relative importance (100%) in predicting usage intention, followed by performance expectancy and effort expectancy with relative importance of 89.45% and 77.29%, respectively. Ranking fourth and fifth are facilitating conditions and perceived avoidability, with relative importance of 58.09% and 54.79%, respectively. Perceived threats demonstrated the lowest relative importance at 23.15%. The predictive capability of the input (independent) variables within the artificial neural network is quantified using normalized relative importance, a metric analogous to the path coefficient in PLS-SEM. As shown in Table 7, the importance ranking of the independent variables is consistent across the ANN model and PLS-SEM, corroborating the robustness of the empirical findings from the first stage.
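The normalization behind these percentages divides each predictor's raw importance by the largest value so that the strongest predictor reads 100%; the raw figures in the sketch below are hypothetical reconstructions chosen only to reproduce the ranking.

```python
# Illustration of the normalization behind Table 6. The raw importances
# below are hypothetical reconstructions, not study outputs.
raw_importance = {
    "PAIL": 0.2483, "PE": 0.2221, "EE": 0.1919,
    "FC": 0.1442, "PA": 0.1360, "PT": 0.0575,
}
top = max(raw_importance.values())
for name, value in sorted(raw_importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * value / top:.1f}%")   # PAIL scales to 100%
```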

4.7. Discussion of the Results

First, regarding the technology acceptance constructs, performance expectancy, effort expectancy, and facilitating conditions have significant effects on generative AI usage intention, with performance expectancy the strongest predictor among them according to the ANN analysis. This finding aligns with recent international studies [93,94], highlighting that anticipated gains in efficiency and effectiveness remain central determinants of AI adoption across diverse settings. Although UTAUT generally identifies social influence as a significant factor in technology adoption, our study did not find a significant effect of social influence on generative AI usage intention among Chinese enterprise employees. This finding contrasts with prior research in contexts such as educational technology adoption, where social norms and peer influence often play a decisive role [14,37,48]. One possible explanation is that generative AI remains a relatively novel and exploratory technology in many organizational contexts, so established social norms regarding its use may be weak or ambiguous [65,66,69]. In addition, the private and self-directed nature of many generative AI applications could diminish the influence of external expectations [64,95,96]. For example, in voluntary or highly self-directed contexts, social influence is not a significant predictor of behavioral intention to adopt AI-based technologies [93,95]; rather, perceived usefulness and self-efficacy are more influential. This suggests that, for innovative and personally controlled tools like generative AI, individual cognitive factors may outweigh traditional social pressures. Furthermore, cultural factors such as high institutional trust and a strong performance orientation in Chinese workplaces may lead employees to prioritize perceived usefulness and individual mastery over social endorsement [97,98]. For example, Kurniasari [50] and Wang and Xiao [51] have pointed out that when social influence conflicts with personal values or cognition, it may instead inhibit users' willingness to use a technology. Therefore, social influence does not appear to play a significant role in promoting employees' generative AI usage intention in Chinese enterprises.
Second, generative AI usage intention is substantially driven by perceptions of both threat and avoidability, but these influence users in opposite directions. Perceived threats exert a negative impact on usage intention, whereas perceived avoidability has a positive one. These results validate previous empirical evidence [56,93,94], indicating that the avoidance framework of TTAT is applicable to this context. In other words, understanding the interplay between perceived threat and perceived avoidability is essential for predicting and influencing user acceptance of AI. For example, Liang and Xue [15] found that both perceived threat severity and perceived threat susceptibility significantly reduced users' willingness to use IT systems, while belief in effective coping mechanisms (perceived avoidability) increased the intention to use them. More recently, Joo et al. [94] demonstrated that, in the context of AI-based smart services, users' perceptions of technological threat suppressed adoption intention, but a higher sense of control or avoidability promoted it. By addressing users' concerns and highlighting the benefits, developers and policymakers can foster greater willingness to use AI technologies.
Third, perceived AI literacy has a notable positive impact on generative AI usage intention, in line with prior studies [17]. Our findings extend this work by showing that perceived AI literacy not only directly promotes usage intention but also moderates the negative effect of perceived threat. That is, when individuals feel more knowledgeable and competent about AI, they are less likely to be deterred by potential risks—a pattern echoed in recent studies that advocate for multidimensional AI literacy [93,95]. However, we did not observe a significant moderating effect of AI literacy on the link between perceived avoidability and usage intention. While perceived avoidability positively correlates with usage intention as hypothesized, AI literacy's anticipated amplification effect was absent. One explanation lies in cognitive evaluation saturation: when users already possess a high baseline awareness of generative AI's transformative benefits (e.g., efficiency gains, creative augmentation), further literacy increases may not enhance avoidability's influence. For most adopters, perceiving avoidability sufficiently enables adoption decisions regardless of literacy level—akin to how internet users adopt search engines despite varied technical knowledge. This suggests that perceived avoidability operates as a universal enabler in low-stakes contexts (e.g., routine content generation), while AI literacy's moderating role may become critical only in high-risk or high-skill contexts.

5. Implications

5.1. Theoretical Implications

This research extends existing theory in three main ways. Firstly, this paper introduces an acceptance–avoidance framework aimed at understanding users' generative AI usage intention. This framework is built upon UTAUT and TTAT. Unlike prior work that considered acceptance and avoidance separately, our model provides a unified structure for analyzing how positive drivers and threat-based inhibitors interact to shape generative AI usage intention [15,64]. On the one hand, this study refines and expands the integrated AI acceptance–avoidance model (IAAAM). The original IAAAM primarily focused on enterprise managers, examining how these high-level decision-makers perceive and adopt AI technologies [65]. By extending the application of this model to ordinary enterprise employees, this paper enlarges the research scope to incorporate a broader range of organizational stakeholders. On the other hand, this paper enhances the treatment of technology threat avoidance. Building on the IAAAM, this paper introduces perceived avoidability (PA) in addition to the original perceived threat (PT) to more comprehensively address the variables related to users' technology avoidance [56].
Secondly, this research makes a distinct theoretical advancement by integrating perceived AI literacy into established UTAUT and TTAT frameworks, offering a more holistic and cognitively grounded model of technology adoption. While most prior studies have focused primarily on motivational or threat-related factors, our approach recognizes cognitive readiness—specifically, perceived AI literacy—as essential for effective adoption and human–machine collaboration. This is especially relevant in the context of Industry 5.0, which prioritizes human-centric, synergistic integration between employees and intelligent technologies [3]. Furthermore, by empirically validating the model in China—a setting characterized by accelerated digital transformation, strong institutional trust, and performance orientation [95,98]—our work demonstrates how contextual and cultural factors shape the pathways to AI adoption. These contributions move beyond previous AI adoption models by addressing both cognitive and environmental prerequisites for responsible, widespread generative AI use.
Thirdly, this paper introduces perceived AI literacy as a moderating variable between perceived threat and generative AI usage intention. Unlike traditional TTAT constructs, perceived AI literacy reflects users’ cognitive capacity to understand and manage AI systems, which can buffer avoidance responses and promote adoption [36]. This integration contributes to a more cognitively grounded and context-sensitive model of generative AI acceptance under the human-centric vision of Industry 5.0, broadening the boundaries of research on generative AI usage intention. Moreover, the current research further broadens the theoretical applicability of UTAUT and TTAT by shifting the focus from the well-studied educational context to underexplored sectors such as manufacturing and services. While prior studies, such as Xia and Chen [13], Yakubu et al. [14], and Zhou et al. [12], have predominantly investigated generative AI adoption among students and educators, this study tests these frameworks in a broad operational context spanning the manufacturing, service, financial, and IT industries.
Finally, this study contributes to the cultural generalizability of technology adoption research. Our investigation covers major industries with intensive AI application, namely information technology, manufacturing, services, finance, and education, based on survey data from a representative provincial capital in western China. While mainstream models such as UTAUT and TTAT have been widely validated in Western and educational contexts, our findings reveal that key mechanisms, such as the non-significant role of social influence, are shaped by distinctive features of the Chinese organizational environment, including high institutional trust and strong performance orientation [16,34]. This highlights the primacy of organizational signals and individual cognitive resources over peer influence in technology adoption decisions. However, given the focus on a specific region and cultural setting, caution is required when applying our results to other contexts. Cultural, institutional, and regional differences may moderate the observed relationships, and the generalizability of our findings should be further tested in other national or cross-cultural settings [95]. Therefore, our study underscores the necessity of adapting global technology acceptance frameworks to local environments and encourages future research to systematically examine cultural and industry-specific boundary conditions.

5.2. Managerial Implications

This study offers practical management insights from a cost-benefit perspective. Specifically, the fundamental constructs of the UTAUT, namely performance expectancy, effort expectancy, and facilitating conditions, are interpreted as the benefits employees expect to receive. These constructs positively influence employees’ willingness to adopt generative AI. Conversely, constructs derived from the TTAT, such as perceived threat and perceived avoidability, represent the psychological and operational costs associated with technology use, including security risks and cognitive load. Perceived AI literacy serves as a crucial moderating factor between these two sets of constructs, enhancing the conversion of perceived benefits while mitigating perceived costs. Consequently, to promote sustainable human–machine collaboration, management strategies should prioritize expanding perceived benefits, reducing perceived costs, and balancing the two through the systematic development of AI literacy. Based on the research findings, this paper proposes the following three recommendations:
First, with the aim of maximizing perceived benefits, organizations should prioritize cultivating employees’ belief in AI’s potential to aid in achieving their objectives and their confidence in mastering AI technology, as these are essential drivers of technology adoption. Practically, proficiency in AI applications should be directly integrated into the performance reward system. Additionally, targeted training programs should be developed to visually illustrate how AI can alleviate repetitive tasks, thereby allowing employees to concentrate on higher-value activities such as creative thinking.
Furthermore, although social influence did not demonstrate a significant direct effect, adequate facilitating conditions are crucial for a successful transformation. Organizations should move beyond formal commitments by making substantial investments, such as acquiring advanced AI infrastructure, establishing dedicated technical support teams, providing on-site guidance from experts, and allocating resources for employee innovation and research and development. These measures help reduce adaptation challenges by ensuring a reliable supply of resources and promoting sustainable human–machine collaboration. Governments can incentivize enterprises to increase relevant investments through tax incentives, while coordinating the development of regional artificial intelligence public service platforms. Such platforms would provide shared infrastructure and technical support for small and medium-sized enterprises, thereby lowering the barriers to industry-wide transformation and fostering a sustainable ecosystem of human–machine collaboration.
Second, with the aim of minimizing perceived costs, companies should focus on addressing employees’ concerns regarding the potential risks associated with AI. This can be achieved by enhancing employees’ confidence in AI risk management through transparent communication, practical safeguards, and efficient feedback mechanisms. Training programs should clearly articulate the data governance mechanisms integrated into AI systems and the risk mitigation solutions implemented by operators. Furthermore, establishing a rapid-response communication channel is crucial for incorporating employees’ concerns into the system optimization process, thereby helping employees effectively navigate technical uncertainties.
Third, perceived AI literacy holds dual strategic significance: it directly facilitates technology adoption while simultaneously mitigating employees’ resistance to perceived threats. To capture these benefits, organizations should design a progressive learning trajectory that begins with foundational operational skills and advances toward innovative applications. A weekly expert consultation session can provide readily accessible troubleshooting support during working hours. Additionally, establishing incentive mechanisms that explicitly reward both the application of AI outputs and the advancement of skills is crucial. As employees build competence through practice, AI tools should consistently be framed as supportive aids that augment human decision-making rather than replace it.

6. Limitations and Future Research

This research acknowledges several limitations that suggest directions for future work. First, the data were sourced exclusively from China, which establishes cultural boundary conditions. As a result, caution is advised when extrapolating these conclusions to other countries or regions, as cultural, social, and technological variations might greatly affect AI usage willingness and behavior. Future investigations could draw on a range of international samples to boost the external validity of the findings and explore potential cross-cultural differences in AI adoption.
Second, this study focuses on measuring AI usage intention. While intention is an important predictor of behavior, it does not directly capture actual usage. Future research should employ alternative methodologies, such as longitudinal designs, field experiments, or objective usage data collection (e.g., system logs), to capture how the identified factors translate into sustained adoption and usage patterns. This approach would address the intention–behavior gap while generating actionable insights for practitioners.
Third, the measurement of AI literacy in this study uses a shorter version of the scale. While it provides some insight into the role of AI literacy, future studies could investigate different dimensions of AI literacy in more detail. By developing and validating more comprehensive measures of AI literacy, researchers can better understand how specific aspects of AI-related knowledge, skills, and attitudes affect users’ interactions with AI technologies. This may involve differentiating between technical and conceptual AI knowledge, as well as examining how different levels of AI literacy interact with other factors identified in this study.

Author Contributions

Conceptualization, C.L. and X.D.; methodology, C.L. and L.Y.; software, C.L. and L.Y.; validation, L.Y.; formal analysis, C.L.; investigation, C.L. and X.L.; resources, C.L. and X.D.; data curation, L.Y.; writing—original draft preparation, C.L. and L.Y.; writing—review and editing, C.L. and X.L.; visualization, C.L.; supervision, X.D.; project administration, X.D.; funding acquisition, C.L. and X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 72404220; the Philosophy and Social Science Research Special Foundation of Shaanxi Province, grant numbers 2023HZ1661, 2025QN0540; the Xi’an Social Science Planning Fund Project, grant number 25GL55.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors thank all study participants for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Measurement items.

Performance Expectancy (PE). Sources: Venkatesh et al. [64] and Cao et al. [65].
PE1: I find Generative AI useful in my daily life.
PE2: Using Generative AI increases my chances of achieving tasks that are important to me.
PE3: Using Generative AI helps me accomplish tasks more quickly.
PE4: Using Generative AI increases my productivity.

Effort Expectancy (EE). Sources: Venkatesh et al. [64] and Cao et al. [65].
EE1: Learning how to use Generative AI is easy for me.
EE2: My interaction with Generative AI is clear and understandable.
EE3: I find Generative AI easy to use.
EE4: It is easy for me to become skillful at using Generative AI.

Social Influence (SI). Sources: Venkatesh et al. [64] and Cao et al. [65].
SI1: People who are important to me think I should use Generative AI.
SI2: People who influence my behavior think I should use Generative AI.
SI3: People whose opinions I value prefer that I use Generative AI.

Facilitating Conditions (FC). Sources: Venkatesh et al. [64] and Cao et al. [65].
FC1: I have the resources necessary to use Generative AI.
FC2: I have the knowledge necessary to understand Generative AI.
FC3: Generative AI is compatible with other technologies I use.
FC4: I can get help from others when I have difficulties using Generative AI.

Perceived Threats (PT). Sources: Baabdullah et al. [56] and Liang et al. [32].
PT1: My fear of exposure to Generative AI’s risks is high.
PT2: The extent of my anxiety about potential loss due to Generative AI’s risks is high.
PT3: The extent of my worry about Generative AI’s risks due to misuse is high.

Perceived Avoidability (PA). Source: Liang et al. [32].
PA1: Taking everything into consideration (effectiveness of countermeasures, costs, and my confidence in employing countermeasures), the threat of Generative AI could be prevented.
PA2: Taking everything into consideration (effectiveness of countermeasures, costs, and my confidence in employing countermeasures), I could protect myself from the threat of Generative AI.
PA3: Taking everything into consideration (effectiveness of countermeasures, costs, and my confidence in employing countermeasures), the threat of Generative AI was avoidable.

Perceived Artificial Intelligence Literacy (PAIL). Source: Grassini [67].
PAIL1: I understand the basic concepts of Generative artificial intelligence.
PAIL2: I believe I can contribute to Generative AI projects. (dropped)
PAIL3: I can judge the pros and cons of Generative AI.
PAIL4: I keep up with the latest Generative AI trends.
PAIL5: I’m comfortable talking about Generative AI with others.
PAIL6: I can think of new ways to use existing Generative AI tools.

Generative AI Usage Intention (UI). Sources: Venkatesh et al. [64] and Meng et al. [68].
UI1: I intend to use Generative AI in the future.
UI2: I plan to use Generative AI in future.
UI3: I predict I will use Generative AI in the future.
Table A2. Common method variance (CMV).

Construct | Indicator | Substantive Factor Loading (Ra) | Ra² | Method Factor Loading (Rb) | Rb²
PE | PE→PE1 | 0.757 | 0.573 | 0.040 | 0.002
PE | PE→PE2 | 0.704 | 0.496 | 0.113 | 0.013
PE | PE→PE3 | 0.731 | 0.534 | −0.033 | 0.001
PE | PE→PE4 | 0.846 | 0.716 | −0.131 | 0.017
EE | EE→EE1 | 0.805 | 0.648 | −0.076 | 0.006
EE | EE→EE2 | 0.817 | 0.667 | −0.095 | 0.009
EE | EE→EE3 | 0.802 | 0.643 | 0.009 | 0.000
EE | EE→EE4 | 0.661 | 0.437 | 0.162 | 0.026
SI | SI→SI1 | 0.850 | 0.723 | 0.026 | 0.001
SI | SI→SI2 | 0.847 | 0.717 | 0.004 | 0.000
SI | SI→SI3 | 0.898 | 0.806 | −0.030 | 0.001
FC | FC→FC1 | 0.714 | 0.510 | 0.037 | 0.001
FC | FC→FC2 | 0.755 | 0.570 | −0.040 | 0.002
FC | FC→FC3 | 0.750 | 0.563 | −0.016 | 0.000
FC | FC→FC4 | 0.797 | 0.635 | 0.016 | 0.000
PT | PT→PT1 | 0.820 | 0.672 | −0.013 | 0.000
PT | PT→PT2 | 0.830 | 0.689 | 0.000 | 0.000
PT | PT→PT3 | 0.799 | 0.638 | 0.012 | 0.000
PA | PA→PA1 | 0.815 | 0.664 | 0.014 | 0.000
PA | PA→PA2 | 0.760 | 0.578 | 0.061 | 0.004
PA | PA→PA3 | 0.909 | 0.826 | −0.071 | 0.005
PAIL | PAIL→PAIL1 | 0.853 | 0.728 | 0.027 | 0.001
PAIL | PAIL→PAIL3 | 0.740 | 0.548 | 0.109 | 0.012
PAIL | PAIL→PAIL4 | 0.916 | 0.839 | −0.031 | 0.001
PAIL | PAIL→PAIL5 | 0.916 | 0.839 | −0.038 | 0.001
PAIL | PAIL→PAIL6 | 0.775 | 0.601 | −0.071 | 0.005
UI | UI→UI1 | 0.776 | 0.602 | 0.071 | 0.005
UI | UI→UI2 | 0.774 | 0.599 | 0.074 | 0.005
UI | UI→UI3 | 0.930 | 0.865 | −0.150 | 0.023
Average | | | 0.653 | | 0.005
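Note: By the usual reading of this method-factor test, the average substantive variance explained (mean Ra² = 0.653) far exceeds the average method-based variance (mean Rb² = 0.005), a ratio of roughly 131:1, and the method factor loadings are uniformly small, suggesting that common method variance is unlikely to be a serious concern in this study.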

References

  1. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and Human-Robot Co-Working. Procedia Comput. Sci. 2019, 158, 688–695. [Google Scholar] [CrossRef]
  2. Rožanec, J.M.; Novalija, I.; Zajec, P.; Kenda, K.; Tavakoli Ghinani, H.; Suh, S.; Veliou, E.; Papamartzivanos, D.; Giannetsos, T.; Menesidou, S.A.; et al. Human-Centric Artificial Intelligence Architecture for Industry 5.0 Applications. Int. J. Prod. Res. 2023, 61, 6847–6872. [Google Scholar] [CrossRef]
  3. Rane, N. ChatGPT and Similar Generative Artificial Intelligence (AI) for Smart Industry: Role, Challenges and Opportunities for Industry 4.0, Industry 5.0 and Society 5.0. Chall. Oppor. Ind. 2023, 31, 10–17. [Google Scholar] [CrossRef]
  4. Akundi, A.; Euresti, D.; Luna, S.; Ankobiah, W.; Lopes, A.; Edinbarough, I. State of Industry 5.0—Analysis and Identification of Current Research Trends. Appl. Syst. Innov. 2022, 5, 27. [Google Scholar] [CrossRef]
  5. Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
  6. Shakya, R.; Vadiee, F.; Khalil, M. A Showdown of ChatGPT vs DeepSeek in Solving Programming Tasks. In Proceedings of the 2025 International Conference on New Trends in Computing Sciences (ICTCS), Amman, Jordan, 16–18 April 2025; pp. 413–418. [Google Scholar]
  7. Singh, S.; Bansal, S.; Saddik, A.E.; Saini, M. From ChatGPT to DeepSeek AI: A Comprehensive Analysis of Evolution, Deviation, and Future Implications in AI-Language Models. arXiv 2025, arXiv:2504.03219. [Google Scholar]
  8. CNNIC Generative Artificial Intelligence Application Development Report. Available online: https://www.cnnic.cn/NMediaFile/2025/0321/MAIN17425363970494TQVTVCI5P.pdf (accessed on 12 July 2025).
  9. Rawashdeh, A. The Consequences of Artificial Intelligence: An Investigation into the Impact of AI on Job Displacement in Accounting. J. Sci. Technol. Policy Manag. 2023, 16, 506–535. [Google Scholar] [CrossRef]
  10. Plikas, J.H.; Trakadas, P.; Kenourgios, D. Assessing the Ethical Implications of Artificial Intelligence (AI) and Machine Learning (ML) on Job Displacement Through Automation: A Critical Analysis of Their Impact on Society. In Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications; Farmanbar, M., Tzamtzi, M., Verma, A.K., Chakravorty, A., Eds.; Springer Nature: Singapore, 2024; pp. 313–325. [Google Scholar]
  11. Utomo, P.; Kurniasari, F.; Purnamaningsih, P. The Effects of Performance Expectancy, Effort Expectancy, Facilitating Condition, and Habit on Behavior Intention in Using Mobile Healthcare Application. Int. J. Community Serv. Engagem. 2021, 2, 183–197. [Google Scholar] [CrossRef]
  12. Zhou, T.; Lu, Y.; Wang, B. Integrating TTF and UTAUT to Explain Mobile Banking User Adoption. Comput. Hum. Behav. 2010, 26, 760–767. [Google Scholar] [CrossRef]
  13. Xia, Y.; Chen, Y. Driving Factors of Generative AI Adoption in New Product Development Teams from a UTAUT Perspective. Int. J. Hum.–Comput. Interact. 2025, 41, 6067–6088. [Google Scholar] [CrossRef]
  14. Yakubu, M.N.; David, N.; Abubakar, N.H. Students’ Behavioural Intention to Use Content Generative AI for Learning and Research: A UTAUT Theoretical Perspective. Educ. Inf. Technol. 2025; in press. [Google Scholar] [CrossRef]
  15. Liang, H.; Xue, Y. Understanding Security Behaviors in Personal Computer Usage: A Threat Avoidance Perspective. J. Assoc. Inf. Syst. 2010, 11, 394–413. [Google Scholar] [CrossRef]
  16. Ansari, M.F. A Quantitative Study of Risk Scores and the Effectiveness of AI-Based Cybersecurity Awareness Training Programs. Int. J. Smart Sens. Adhoc Netw. 2022, 3, 1. [Google Scholar] [CrossRef]
  17. Wang, C.; Wang, H.; Li, Y.; Dai, J.; Gu, X.; Yu, T. Factors Influencing University Students’ Behavioral Intention to Use Generative Artificial Intelligence: Integrating the Theory of Planned Behavior and AI Literacy. Int. J. Hum.–Comput. Interact. 2025, 41, 6649–6671. [Google Scholar] [CrossRef]
  18. Al-Abdullatif, A.M. Modeling Teachers’ Acceptance of Generative Artificial Intelligence Use in Higher Education: The Role of AI Literacy, Intelligent TPACK, and Perceived Trust. Educ. Sci. 2024, 14, 1209. [Google Scholar] [CrossRef]
  19. Mills, K.; Ruiz, P.; Lee, K.; Coenraad, M.; Fusco, J.; Roschelle, J.; Weisgrau, J. AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. Digital Promise 2024. [Google Scholar] [CrossRef]
  20. Long, D.; Magerko, B. What Is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–16. [Google Scholar]
  21. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI Literacy: An Exploratory Review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  22. Pan, L.; Luo, H.; Gu, Q. Incorporating AI Literacy and AI Anxiety Into TAM: Unraveling Chinese Scholars’ Behavioral Intentions Toward Adopting AI-Assisted Literature Reading. IEEE Access 2025, 13, 38952–38963. [Google Scholar] [CrossRef]
  23. WEF. The Future of Jobs Report 2023. Available online: https://www3.weforum.org/docs/WEF_Future_of_Jobs_2023.pdf (accessed on 12 July 2025).
  24. Zhang, B.; Dafoe, A. Artificial Intelligence: American Attitudes and Trends. SSRN 2019. [Google Scholar] [CrossRef]
  25. Ochieng, E.G.; Ominde, D.; Zuofa, T. Potential Application of Generative Artificial Intelligence and Machine Learning Algorithm in Oil and Gas Sector: Benefits and Future Prospects. Technol. Soc. 2024, 79, 102710. [Google Scholar] [CrossRef]
  26. Wang, S.; Zhang, H. Leveraging Generative Artificial Intelligence for Sustainable Business Model Innovation in Production Systems. Int. J. Prod. Res. 2025, 1–26. [Google Scholar] [CrossRef]
  27. Zhang, Q.; Zuo, J.; Yang, S. Research on the Impact of Generative Artificial Intelligence (GenAI) on Enterprise Innovation Performance: A Knowledge Management Perspective. J. Knowl. Manag. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  28. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  29. Sharma, S.; Singh, G. Adoption of Artificial Intelligence in Higher Education: An Empirical Study of the UTAUT Model in Indian Universities. Int. J. Syst. Assur. Eng. Manag. 2024; in press. [Google Scholar] [CrossRef]
  30. Pramod, D.; Patil, K.P.; Kumar, D.; Singh, D.R.; Singh Dodiya, C.; Noble, D. Generative AI and Deep Fakes in Media Industry–An Innovation Resistance Theory Perspective. In Proceedings of the 2024 International Conference on Electrical Electronics and Computing Technologies (ICEECT), Greater Noida, India, 29–31 August 2024; Volume 1, pp. 1–5. [Google Scholar] [CrossRef]
  31. Singh, A.; Dwivedi, A.; Agrawal, D.; Singh, D. Identifying Issues in Adoption of AI Practices in Construction Supply Chains: Towards Managing Sustainability. Oper. Manag. Res. 2023, 16, 1667–1683. [Google Scholar] [CrossRef]
  32. Liang, H.; Xue, Y. Avoidance of Information Technology Threats: A Theoretical Perspective. MIS Q. 2009, 33, 71–90. [Google Scholar] [CrossRef]
  33. Domínguez Figaredo, D.; Stoyanovich, J. Responsible AI Literacy: A Stakeholder-First Approach. Big Data Soc. 2023, 10, 1–15. [Google Scholar] [CrossRef]
  34. Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring User Competence in Using Artificial Intelligence: Validity and Reliability of Artificial Intelligence Literacy Scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  35. Wilton, L.; Ip, S.; Sharma, M.; Fan, F. Where Is the AI? AI Literacy for Educators. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners’ and Doctoral Consortium; Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 180–188. [Google Scholar]
  36. Cho, S.L.; Jeong, S.C. A Study of Negative Factors Affecting Perceived Usefulness and Intention to Continue Using ChatGPT: Focusing on the moderating effect of AI literacy. J. Inf. Syst. 2024, 33, 1–18. [Google Scholar] [CrossRef]
  37. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  38. Du, L.; Lv, B. Factors Influencing Students’ Acceptance and Use Generative Artificial Intelligence in Elementary Education: An Expansion of the UTAUT Model. Educ. Inf. Technol. 2024, 29, 24715–24734. [Google Scholar] [CrossRef]
  39. Strzelecki, A.; ElArabawy, S. Investigation of the Moderation Effect of Gender and Study Level on the Acceptance and Use of Generative AI by Higher Education Students: Comparative Evidence from Poland and Egypt. Br. J. Educ. Technol. 2024, 55, 1209–1230. [Google Scholar] [CrossRef]
  40. Li, X.; Zappatore, M.; Li, T.; Zhang, W.; Tao, S.; Wei, X.; Zhou, X.; Guan, N.; Chan, A. GAI vs. Teacher Scoring: Which Is Better for Assessing Student Performance? IEEE Trans. Learn. Technol. 2025, 18, 569–580. [Google Scholar] [CrossRef]
  41. Jarrahi, M.H.; Askay, D.; Eshraghi, A.; Smith, P. Artificial Intelligence and Knowledge Management: A Partnership between Human and AI. Bus. Horiz. 2023, 66, 87–99. [Google Scholar] [CrossRef]
  42. Yang, Y. Exploring Factors Influencing L2 Learners’ Use of GAI-Assisted Writing Technology: Based on the UTAUT Model. Asia Pac. J. Educ. 2025, 23, 1–20. [Google Scholar] [CrossRef]
  43. Godoe, P.; Johansen, T.S. Understanding Adoption of New Technologies: Technology Readiness and Technology Acceptance as an Integrated Concept. J. Eur. Psychol. Stud. 2012, 3, 38–52. [Google Scholar] [CrossRef]
  44. Tahar, A.; Riyadh, H.A.; Sofyani, H.; Purnomo, W.E. Perceived Ease of Use, Perceived Usefulness, Perceived Security and Intention to Use E-Filing: The Role of Technology Readiness. J. Asian Financ. Econ. Bus. 2020, 7, 537–547. [Google Scholar] [CrossRef]
  45. Cabero-Almenara, J.; Palacios-Rodríguez, A.; Rojas Guzmán, H.; de los, Á.; Fernández-Scagliusi, V. Prediction of the Use of Generative Artificial Intelligence Through ChatGPT Among Costa Rican University Students: A PLS Model Based on UTAUT2. Appl. Sci. 2025, 15, 3363. [Google Scholar] [CrossRef]
  46. Goncalves, C.; Rouco, J.C.D. Proceedings of the International Conference on AI Research; Academic Conferences and Publishing Limited: Manchester, UK, 2024; ISBN 978-1-917204-28-6. [Google Scholar]
  47. Swargiary, K.; Roy, S.K. A Comprehensive Analysis of Influences on Higher Education Through Artificial Intelligence. SSRN 2024. [Google Scholar]
  48. Jo, H.; Bang, Y. Analyzing ChatGPT Adoption Drivers with the TOEK Framework. Sci. Rep. 2023, 13, 22606. [Google Scholar] [CrossRef] [PubMed]
  49. DiMaggio, P.J.; Powell, W.W. The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. Am. Sociol. Rev. 1983, 48, 147. [Google Scholar] [CrossRef]
  50. Kurniasari, F.; Tajul Urus, S.; Utomo, P.; Abd Hamid, N.; Jimmy, S.Y.; Othman, I.W. Determinant Factors of Adoption of Fintech Payment Services in Indonesia Using the UTAUT Approach. Asian-Pac. Manag. Account. J. 2022, 17, 97–125. [Google Scholar] [CrossRef]
  51. Wang, L.; Xiao, J. Research on Influencing Factors of Learners’ Intention of Online Learning Behaviour in Open Education Based on UTAUT Model. In Proceedings of the 10th International Conference on Education Technology and Computers, Tokyo, Japan, 26–28 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 92–98. [Google Scholar]
  52. Levina, N.; Lifshitz-Assaf, H. Is AI Ground Truth Really True? The Dangers of Training and Evaluating AI Tools Based on Experts’ Know-What. MIS Q. 2021, 45, 1501–1526. [Google Scholar] [CrossRef]
  53. Bagchi, K.; Udo, G. An Analysis of the Growth of Computer and Internet Security Breaches. Commun. Assoc. Inf. Syst. 2003, 12, 46. [Google Scholar] [CrossRef]
  54. Liu, C.; Wang, N.; Liang, H. Motivating Information Security Policy Compliance: The Critical Role of Supervisor-Subordinate Guanxi and Organizational Commitment. Int. J. Inf. Manag. 2020, 54, 102152. [Google Scholar] [CrossRef]
  55. Herath, T.; Chen, R.; Wang, J.; Banjara, K.; Wilbur, J.; Rao, H.R. Security Services as Coping Mechanisms: An Investigation into User Intention to Adopt an Email Authentication Service. Inf. Syst. J. 2012, 24, 61–84. [Google Scholar] [CrossRef]
  56. Liang, H.; Xue, Y.; Pinsonneault, A.; Wu, Y. What Users Do Besides Problem-Focused Coping When Facing IT Security Threats: An Emotion-Focused Coping Perspective. MIS Q. 2019, 43, 373–394. [Google Scholar] [CrossRef]
  57. Pan, J.; Ding, S.; Wu, D.; Yang, S.; Yang, J. Exploring Behavioural Intentions toward Smart Healthcare Services among Medical Practitioners: A Technology Transfer Perspective. Int. J. Prod. Res. 2019, 57, 5801–5820. [Google Scholar] [CrossRef]
  58. Shiva, A.; Kushwaha, B.P.; Rishi, B. A Model Validation of Robo-Advisers for Stock Investment. Borsa Istanb. Rev. 2023, 23, 1458–1473. [Google Scholar] [CrossRef]
  59. Beaudry, A.; Pinsonneault, A. Understanding User Responses to Information Technology: A Coping Model of User Adaptation. MIS Q. 2005, 29, 493–524. [Google Scholar] [CrossRef]
  60. Almatrafi, O.; Johri, A.; Lee, H. A Systematic Review of AI Literacy Conceptualization, Constructs, and Implementation and Assessment Efforts (2019–2023). Comput. Educ. Open 2024, 6, 100173. [Google Scholar] [CrossRef]
  61. Bozkurt, A.; Sharma, R.C. Generative AI and Prompt Engineering: The Art of Whispering to Let the Genie Out of the Algorithmic World. Asian J. Distance Educ. 2023, 18, i–vii. [Google Scholar]
  62. Kenku, A.A.; Uzoigwe, T.L. Determinants of artificial intelligence anxiety: Impact of some psychological and organisational characteristics among staff of Federal Polytechnic Nasarawa, Nigeria. Afr. J. Psychol. Stud. Soc. Issues 2024, 27, 46–61. [Google Scholar]
  63. Al-Abdullatif, A.M.; Alsubaie, M.A. ChatGPT in Learning: Assessing Students’ Use Intentions through the Lens of Perceived Value and the Influence of AI Literacy. Behav. Sci. 2024, 14, 845. [Google Scholar] [CrossRef] [PubMed]
  64. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  65. Cao, G.; Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Understanding Managers’ Attitudes and Behavioral Intentions towards Using Artificial Intelligence for Organizational Decision-Making. Technovation 2021, 106, 102312. [Google Scholar] [CrossRef]
  66. Baabdullah, A.M. The Precursors of AI Adoption in Business: Towards an Efficient Decision-Making and Functional Performance. Int. J. Inf. Manag. 2024, 75, 102745. [Google Scholar] [CrossRef]
  67. Grassini, S. A Psychometric Validation of the PAILQ-6: Perceived Artificial Intelligence Literacy Questionnaire. In Proceedings of the 13th Nordic Conference on Human-Computer Interaction, Uppsala, Sweden, 13–16 October 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–10. [Google Scholar]
  68. Meng, M.; Chenhui, L.; Zheng, H.; Xing, W. Consumer Usage of Mobile Visual Search in China: Extending UTAUT2 With Perceived Contextual Offer and Implementation Intention. J. Glob. Inf. Manag. 2024, 32, 1–29. [Google Scholar] [CrossRef]
  69. Andrews, J.E.; Ward, H.; Yoon, J. UTAUT as a Model for Understanding Intention to Adopt AI and Related Technologies among Librarians. J. Acad. Librariansh. 2021, 47, 102437. [Google Scholar] [CrossRef]
  70. Brislin, R.W. Translation and Content Analysis of Oral and Written Materials. In Handbook of Cross-Cultural Psychology; Triandis, H.C., Berry, J.W., Eds.; Allyn and Bacon: Boston, MA, USA, 1980; Volume 2, pp. 389–444. [Google Scholar]
  71. Hatlevik, O.E.; Throndsen, I.; Loi, M.; Gudmundsdottir, G.B. Students’ ICT Self-Efficacy and Computer and Information Literacy: Determinants and Relationships. Comput. Educ. 2018, 118, 107–119. [Google Scholar] [CrossRef]
  72. World Economic Forum; Accenture China. Blueprint to Action: China’s Path to AI-Powered Industry Transformation; Online White Paper; World Economic Forum: Geneva, Switzerland, 2025; Available online: https://www.weforum.org/publications/industries-in-the-intelligent-age-white-paper-series/china-transformation-of-industries/ (accessed on 12 July 2025).
  73. Hair, J.F.; Ringle, C.M.; Sarstedt, M. Editorial–Partial Least Squares: The Better Approach to Structural Equation Modeling? Long Range Plan. 2012, 45, 312–319. [Google Scholar] [CrossRef]
  74. Saraf, N.; Liang, H.; Xue, Y.; Hu, Q. How Does Organisational Absorptive Capacity Matter in the Assimilation of Enterprise Information Systems? Inf. Syst. J. 2013, 23, 245–267. [Google Scholar] [CrossRef]
  75. Ringle, C.M.; Wende, S.; Becker, J.-M. SmartPLS 4. Available online: https://www.smartpls.com/ (accessed on 13 July 2025).
  76. Guan, B.; Hsu, C. The Role of Abusive Supervision and Organizational Commitment on Employees’ Information Security Policy Noncompliance Intention. Internet Res. 2020, 30, 1383–1405. [Google Scholar] [CrossRef]
  77. Hsu, J.S.-C.; Shih, S.-P.; Hung, Y.W.; Lowry, P.B. The Role of Extra-Role Behaviors and Social Controls in Information Security Policy Effectiveness. Inf. Syst. Res. 2015, 26, 282–300. [Google Scholar] [CrossRef]
  78. Harman, H.H. Modern Factor Analysis; University of Chicago Press: Chicago, IL, USA, 1976; ISBN 978-0-226-31652-9. [Google Scholar]
  79. MacKenzie, S.B.; Podsakoff, P.M.; Podsakoff, N.P. Construct Measurement and Validation Procedures in MIS and Behavioral Research: Integrating New and Existing Techniques. MIS Q. 2011, 35, 293–334. [Google Scholar] [CrossRef]
  80. Bagozzi, R.P. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error: A Comment. J. Mark. Res. 1981, 7, 39–50. [Google Scholar]
  81. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  82. Hu, L.; Bentler, P.M. Fit Indices in Covariance Structure Modeling: Sensitivity to Underparameterized Model Misspecification. Psychol. Methods 1998, 3, 424–453. [Google Scholar] [CrossRef]
  83. Yuan, Y.-P.; Tan, G.W.-H.; Ooi, K.-B. Does COVID-19 Pandemic Motivate Privacy Self-Disclosure in Mobile Fintech Transactions? A Privacy-Calculus-Based Dual-Stage SEM-ANN Analysis. IEEE Trans. Eng. Manag. 2024, 71, 2986–3000. [Google Scholar] [CrossRef]
  84. Cohen, J.; Cohen, P.; West, S.G.; Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed.; Routledge: New York, NY, USA, 2013; ISBN 978-0-203-77444-1. [Google Scholar]
  85. Chin, W.W.; Marcolin, B.L.; Newsted, P.R. A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and an Electronic-Mail Emotion/Adoption Study. Inf. Syst. Res. 2003, 14, 189–217. [Google Scholar] [CrossRef]
  86. Yuan, Y.-P.; Dwivedi, Y.K.; Tan, G.W.-H.; Cham, T.-H.; Ooi, K.-B.; Aw, E.C.-X.; Currie, W. Government Digital Transformation: Understanding the Role of Government Social Media. Gov. Inf. Q. 2023, 40, 101775. [Google Scholar] [CrossRef]
  87. Latif, S.; Zou, Z.; Idrees, Z.; Ahmad, J. A Novel Attack Detection Scheme for the Industrial Internet of Things Using a Lightweight Random Neural Network. IEEE Access 2020, 8, 89337–89350. [Google Scholar] [CrossRef]
  88. Arpaci, I.; Bahari, M. A Complementary SEM and Deep ANN Approach to Predict the Adoption of Cryptocurrencies from the Perspective of Cybersecurity. Comput. Hum. Behav. 2023, 143, 107678. [Google Scholar] [CrossRef]
  89. Leong, L.-Y.; Hew, J.-J.; Lee, V.-H.; Tan, G.W.-H.; Ooi, K.-B.; Rana, N.P. An SEM-ANN Analysis of the Impacts of Blockchain on Competitive Advantage. Ind. Manag. Data Syst. 2023, 123, 967–1004. [Google Scholar] [CrossRef]
  90. Svozil, D.; Kvasnicka, V.; Pospichal, J. Introduction to Multi-Layer Feed-Forward Neural Networks. Chemom. Intell. Lab. Syst. 1997, 39, 43–62. [Google Scholar] [CrossRef]
  91. Lau, A.J.; Tan, G.W.-H.; Loh, X.-M.; Leong, L.-Y.; Lee, V.-H.; Ooi, K.-B. On the Way: Hailing a Taxi with a Smartphone? A Hybrid SEM-Neural Network Approach. Mach. Learn. Appl. 2021, 4, 100034. [Google Scholar] [CrossRef]
  92. Leong, L.-Y.; Hew, T.-S.; Ooi, K.-B.; Chong, A.Y.-L. Predicting the Antecedents of Trust in Social Commerce–A Hybrid Structural Equation Modeling with Neural Network Approach. J. Bus. Res. 2020, 110, 24–40. [Google Scholar] [CrossRef]
  93. Al-Emran, M.; Al-Qaysi, N.; Al-Sharafi, M.A.; Khoshkam, M.; Foroughi, B.; Ghobakhloo, M. Role of Perceived Threats and Knowledge Management in Shaping Generative AI Use in Education and Its Impact on Social Sustainability. Int. J. Manag. Educ. 2025, 23, 101105. [Google Scholar] [CrossRef]
  94. Joo, K.; Kim, H.M.; Hwang, J. Application of the Extended Unified Theory of Acceptance and Use of Technology to a Robotic Golf Caddy: Health Consciousness as a Moderator. Appl. Sci. 2024, 14, 9915. [Google Scholar] [CrossRef]
  95. Hoai, N.T. Factors Affecting Students’ Intention to Use Artificial Intelligence in Learning: An Empirical Study in Vietnam. Edelweiss Appl. Sci. Technol. 2025, 9, 1533–1540. [Google Scholar] [CrossRef]
  96. Al-Maroof, R.S.; Alhumaid, K.; Akour, I.; Salloum, S. Factors That Affect E-Learning Platforms after the Spread of COVID-19: Post Acceptance Study. Data 2021, 6, 49. [Google Scholar] [CrossRef]
  97. Fan, C.; Hu, M.; Shangguan, Z.; Ye, C.; Yan, S.; Wang, M.Y. The Drivers of Employees’ Active Innovative Behaviour in Chinese High-Tech Enterprises. Sustainability 2021, 13, 6032. [Google Scholar] [CrossRef]
  98. Farh, J.-L.; Hackett, R.D.; Liang, J. Individual-Level Cultural Values as Moderators of Perceived Organizational Support–Employee Outcome Relationships in China: Comparing the Effects of Power Distance and Traditionality. Acad. Manag. J. 2007, 50, 715–729. [Google Scholar] [CrossRef]
Figure 1. Research model.
Figure 2. Test results. Note: * p < 0.05, *** p < 0.001, ns: not significant.
Figure 3. The moderating effect of perceived AI literacy.
Figure 4. The ANN model.
Table 1. Respondent characteristics (n = 583).

Demographics | Category | n | Percentage (%)
Gender | Male | 296 | 50.8
Gender | Female | 287 | 49.2
Age | 20–29 | 194 | 33.3
Age | 30–39 | 163 | 28.0
Age | 40–49 | 172 | 29.5
Age | 50–59 | 54 | 9.3
Educational level | High school | 12 | 2.1
Educational level | College | 31 | 5.3
Educational level | Undergraduate | 388 | 66.6
Educational level | Graduate | 148 | 25.4
Educational level | Doctoral | 4 | 0.7
Industry | Service | 89 | 15.3
Industry | Financial | 94 | 16.1
Industry | Education | 128 | 22.0
Industry | IT | 168 | 28.8
Industry | Manufacturing | 104 | 17.8
Work experience | <5 years | 203 | 34.8
Work experience | 5–10 years | 220 | 37.7
Work experience | >10 years | 160 | 27.4
Table 2. Loading, Cronbach’s alpha, CR, and AVE.

Construct | Item | Loading | Cronbach’s Alpha (α) | Composite Reliability (CR) | Average Variance Extracted (AVE)
Performance Expectancy (PE) | PE1 | 0.789 | 0.753 | 0.844 | 0.574
 | PE2 | 0.784 | | |
 | PE3 | 0.726 | | |
 | PE4 | 0.730 | | |
Effort Expectancy (EE) | EE1 | 0.755 | 0.773 | 0.854 | 0.595
 | EE2 | 0.750 | | |
 | EE3 | 0.815 | | |
 | EE4 | 0.763 | | |
Social Influence (SI) | SI1 | 0.875 | 0.832 | 0.899 | 0.748
 | SI2 | 0.863 | | |
 | SI3 | 0.856 | | |
Facilitating Conditions (FC) | FC1 | 0.777 | 0.746 | 0.839 | 0.566
 | FC2 | 0.736 | | |
 | FC3 | 0.704 | | |
 | FC4 | 0.790 | | |
Perceived Threats (PT) | PT1 | 0.813 | 0.750 | 0.857 | 0.666
 | PT2 | 0.829 | | |
 | PT3 | 0.807 | | |
Perceived Avoidability (PA) | PA1 | 0.835 | 0.772 | 0.868 | 0.687
 | PA2 | 0.810 | | |
 | PA3 | 0.840 | | |
Perceived AI Literacy (PAIL) | PAIL1 | 0.882 | 0.895 | 0.923 | 0.708
 | PAIL3 | 0.817 | | |
 | PAIL4 | 0.897 | | |
 | PAIL5 | 0.886 | | |
 | PAIL6 | 0.709 | | |
Generative AI Usage Intention (UI) | UI1 | 0.838 | 0.766 | 0.865 | 0.681
 | UI2 | 0.835 | | |
 | UI3 | 0.801 | | |
Table 3. Discriminant validity.

 | PE | EE | SI | FC | PT | PA | PAIL | UI
PE | 0.758 | | | | | | |
EE | 0.404 | 0.771 | | | | | |
SI | 0.559 | 0.207 | 0.865 | | | | |
FC | 0.448 | 0.306 | 0.503 | 0.752 | | | |
PT | 0.172 | 0.255 | 0.291 | 0.313 | 0.816 | | |
PA | 0.508 | 0.376 | 0.466 | 0.391 | 0.175 | 0.829 | |
PAIL | 0.384 | 0.285 | 0.471 | 0.434 | 0.222 | 0.502 | 0.841 |
UI | 0.526 | 0.480 | 0.351 | 0.418 | 0.111 | 0.500 | 0.554 | 0.825
Note: Diagonal values represent square roots of AVEs.
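As a quick consistency check, each diagonal entry above is the square root of the corresponding AVE in Table 2 (for example, √0.574 ≈ 0.758 for PE). A minimal sketch, assuming only the AVE values reported in Table 2:

```python
# Fornell–Larcker check: the square root of each construct's AVE (from Table 2)
# should exceed that construct's correlations with every other construct.
import math

ave = {"PE": 0.574, "EE": 0.595, "SI": 0.748, "FC": 0.566,
       "PT": 0.666, "PA": 0.687, "PAIL": 0.708, "UI": 0.681}

for construct, value in ave.items():
    # Reproduces the diagonal of Table 3, e.g., PE -> 0.758, UI -> 0.825.
    print(construct, round(math.sqrt(value), 3))
```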
Table 4. HTMT (Heterotrait–Monotrait criterion).

 | PE | EE | SI | FC | PT | PA | PAIL
EE | 0.518 | | | | | |
SI | 0.702 | 0.248 | | | | |
FC | 0.595 | 0.391 | 0.647 | | | |
PT | 0.225 | 0.334 | 0.371 | 0.420 | | |
PA | 0.665 | 0.481 | 0.580 | 0.516 | 0.226 | |
PAIL | 0.467 | 0.335 | 0.554 | 0.532 | 0.274 | 0.602 |
UI | 0.682 | 0.622 | 0.429 | 0.539 | 0.148 | 0.641 | 0.657
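All HTMT ratios above fall below even the conservative 0.85 threshold recommended by Henseler et al. [81] (the largest is 0.702, between SI and PE), which further supports discriminant validity.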
Table 5. RMSE value of ANN models (output: generative AI usage intention).

ANN | Training | Testing
1 | 0.067 | 0.055
2 | 0.071 | 0.058
3 | 0.066 | 0.054
4 | 0.065 | 0.066
5 | 0.066 | 0.062
6 | 0.068 | 0.062
7 | 0.068 | 0.045
8 | 0.066 | 0.058
9 | 0.066 | 0.071
10 | 0.070 | 0.060
Mean | 0.067 | 0.059
Standard Deviation | 0.002 | 0.007
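For readers who wish to approximate the dual-stage procedure behind Table 5, the sketch below trains ten feed-forward networks and records training and testing RMSE. It is illustrative only: the hidden-layer size, sigmoid activation, and 90:10 train–test split are our assumptions, not the authors’ reported settings.

```python
# Minimal sketch of a ten-network RMSE evaluation (cf. Table 5); architecture
# and split ratio are assumptions, not the settings used in the original study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def ann_rmse_runs(X, y, n_runs=10, hidden=(6,)):
    """Train n_runs networks; return a list of (training RMSE, testing RMSE)."""
    results = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                                  random_state=seed)
        net = MLPRegressor(hidden_layer_sizes=hidden, activation="logistic",
                           max_iter=2000, random_state=seed)
        net.fit(X_tr, y_tr)
        results.append((np.sqrt(mean_squared_error(y_tr, net.predict(X_tr))),
                        np.sqrt(mean_squared_error(y_te, net.predict(X_te)))))
    return results
```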
Table 6. Sensitivity analysis.

Network | PE | EE | FC | PT | PA | PAIL
1 | 0.239 | 0.196 | 0.150 | 0.058 | 0.081 | 0.277
2 | 0.340 | 0.152 | 0.099 | 0.025 | 0.179 | 0.206
3 | 0.188 | 0.219 | 0.142 | 0.077 | 0.132 | 0.243
4 | 0.201 | 0.177 | 0.159 | 0.071 | 0.129 | 0.264
5 | 0.196 | 0.201 | 0.149 | 0.037 | 0.172 | 0.245
6 | 0.237 | 0.215 | 0.166 | 0.017 | 0.166 | 0.200
7 | 0.160 | 0.229 | 0.128 | 0.075 | 0.114 | 0.293
8 | 0.198 | 0.195 | 0.170 | 0.035 | 0.175 | 0.227
9 | 0.184 | 0.137 | 0.194 | 0.104 | 0.084 | 0.297
10 | 0.279 | 0.199 | 0.086 | 0.076 | 0.129 | 0.232
Average Importance | 0.222 | 0.192 | 0.144 | 0.058 | 0.136 | 0.248
Normalized Importance (%) | 89.45 | 77.29 | 58.09 | 23.15 | 54.79 | 100.00
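The normalized importance row is each predictor’s average importance divided by the largest average importance. A minimal sketch reproducing it from the averages above (small deviations from the published percentages reflect rounding in the reported averages):

```python
# Normalized importance = average importance / max(average importance) * 100.
avg_importance = {"PE": 0.222, "EE": 0.192, "FC": 0.144,
                  "PT": 0.058, "PA": 0.136, "PAIL": 0.248}

max_imp = max(avg_importance.values())
normalized = {name: round(100 * value / max_imp, 2)
              for name, value in avg_importance.items()}
print(normalized)
# {'PE': 89.52, 'EE': 77.42, 'FC': 58.06, 'PT': 23.39, 'PA': 54.84, 'PAIL': 100.0}
```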
Table 7. Comparison between the results derived from PLS-SEM and ANN.

Path | PLS-SEM Path Coefficient | Normalized Importance (%) | Ranking (PLS-SEM) | Ranking (ANN) | Remark
PE→UI | 0.245 | 89.45 | 2 | 2 | Match
EE→UI | 0.242 | 77.29 | 3 | 3 | Match
FC→UI | 0.106 | 58.09 | 4 | 4 | Match
PT→UI | −0.075 | 23.15 | 6 | 6 | Match
PA→UI | 0.099 | 54.79 | 5 | 5 | Match
PAIL→UI | 0.330 | 100.00 | 1 | 1 | Match
