Article

When Generative AI Meets Abuse: What Are You Anxious About?

1 Department of Business, Gachon University, Seongnam 13120, Republic of Korea
2 School of Information Science and Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 215; https://doi.org/10.3390/jtaer20030215
Submission received: 13 June 2025 / Revised: 7 August 2025 / Accepted: 12 August 2025 / Published: 14 August 2025

Abstract

The rapid progress of generative artificial intelligence (AI) has sparked growing concerns regarding its misuse, privacy risks, and ethical issues. This study investigates the interplay between Generative AI Abuse Anxiety, trust, perceived usefulness, acceptance, and the intention to use generative AI. Using variance-based partial least squares structural equation modeling (PLS-SEM), we analyze 318 valid survey responses. The findings reveal that Generative AI Abuse Anxiety negatively impacts trust, perceived usefulness, acceptance, and the intention to use generative AI. Additionally, different subdimensions of trust play significant roles in shaping users’ technology acceptance and intention to use, though the specific mechanisms differ. This research extends the applicability of the technology acceptance model to the generative AI context and enriches the multidimensional framework of trust studies.

1. Introduction

Generative AI has demonstrated exceptional capabilities in content generation and autonomous decision-making. It is capable of executing repetitive and tedious tasks with higher precision and efficiency [1]. Its applications span a broad range of fields. For instance, Midjourney can generate realistic images and videos, ChatGPT showcases disruptive capabilities in content creation, and AIVA simplifies the process of composing and adapting original music. These technological advancements are significantly reshaping industries such as entertainment, art, and education, enabling users to accomplish tasks that previously required specialized skills.
As technology continues to evolve, generative AI is becoming increasingly intelligent and anthropomorphized. Through natural language understanding and context-driven reasoning, generative AI demonstrates a capacity for continuous adaptation and improvement through autonomous learning mechanisms [1]. It can also simulate human-like behavior, fostering deeper interactions with users [2]. This technological evolution transcends the functional roles of traditional tools, making generative AI a technology agent with social characteristics [3]. This evolution in human–computer interaction brings technology closer to human needs. As a result, technological innovations can be applied in more diverse and practical ways.
However, as generative AI develops rapidly, its risks of misuse and potential ethical issues have also garnered widespread attention [4,5]. Notably, concerns regarding fraud, misinformation, and privacy violations are prevalent. A prime example is the misuse of Deepfake technology [6]. Deepfake technology uses deep learning models to create highly realistic images, videos, or audio. These outputs can be maliciously used to forge identities, spread false information, or conduct extortion [7]. Such abuses not only threaten individual privacy and social order but also undermine public trust in generative AI. A key factor behind this anxiety is the lack of transparency and accountability. When users cannot understand or influence the system’s decision-making process, they experience what is known as the “black box effect” [8]. This effect increases public anxiety and unease about generative AI [9].
Trust is a fundamental prerequisite for understanding users’ perceptions of technology and evaluating whether individuals will adopt it [10]. Trust in technology should not be viewed as a singular or holistic concept [11]. It often involves users’ perceptions across multiple dimensions of the technology system [12]. For instance, Glikson and Woolley [13] suggest that trust in AI can be divided into cognitive and emotional dimensions. Cognitive trust emphasizes the functional attributes of technology, including tangibility, transparency, and task characteristics. In contrast, emotional trust relates to the social characteristics of technology, such as anthropomorphism and the immediacy of action. These elements are particularly crucial in human–computer interaction. Building on this, Choung et al. [14] further conceptualize AI trust as a two-dimensional construct, separating human-like trust from functional-like trust. These two dimensions reflect users’ perceptions of AI’s anthropomorphic characteristics and its technical functionality. Additionally, Botsman [15] proposed a distributed trust mechanism, which reveals the extended effect of users’ trust in platforms.
Based on existing research on technology trust, this study constructs a three-dimensional trust framework. First, Trust in Websites focuses on users’ evaluation of the technical functionality of the platforms or providers hosting generative AI, including trust in IT Quality, Security, and Privacy protection [16]. Second, Human-like Trust connects generative AI with human trust, operationalized through Ability, Benevolence, and Integrity [14]. This dimension captures users’ recognition of the social features of generative AI. Lastly, System-like Trust concerns users’ rational judgments of generative AI at the subjective level, measured by Fairness, Accountability, Transparency, and Explainability [17]. This dimension enables users to comprehend how algorithms make decisions and provides critical insights into the credibility of the generative AI technology itself.
In summary, this study aims to integrate users’ anxiety about the potential abuse of generative AI technology with their trust perceptions. Based on the technology acceptance model (TAM), it constructs an extended theoretical framework. We innovatively introduce Generative AI Abuse Anxiety (GAIA), a variable designed to capture users’ psychological concerns about potential abuse scenarios of generative AI. By revealing how GAIA directly impacts users’ trust, perceived usefulness (PU), the Acceptance of Generative AI (AGAI), and the intention to use (ITU) generative AI, this study aims to enhance the understanding of the driving mechanisms behind users’ technology adoption behaviors. Additionally, we systematically assess how different trust dimensions influence technology acceptance, thereby enriching the research paradigm on technology trust. The results will provide valuable empirical evidence to guide the regulation of generative AI technologies and the development of governance frameworks based on user behavior.

2. Literature Review and Hypotheses

2.1. Generative AI Abuse Anxiety

AI anxiety arises from the uncertainties associated with the swift advancement of AI technologies and their potential adverse effects [9]. A comparable concept to AI anxiety is computer anxiety. The latter refers to negative or uneasy emotions experienced by individuals when using computers [18]. This is usually related to a lack of confidence or proficiency in using computers. In contrast, AI anxiety is more closely linked to the high autonomy and unpredictability of AI technology itself [17].
Beyond the inherent unpredictability of the technology, concerns about AI’s potential biases and ethical implications also contribute significantly to AI anxiety. During the training process, AI may introduce human implicit biases [19]. For example, it may reinforce stereotypes related to sexual orientation, race, or religion [20]. This has raised concerns among scholars about AI’s potential to exacerbate social inequalities and discrimination [21]. Moreover, the lack of sufficient regulatory frameworks has amplified the public’s fear of AI becoming uncontrollable. In high-risk areas such as autonomous driving and medical diagnosis, AI’s autonomous decision-making could lead to unforeseen consequences [22]. Additionally, AI may cause job displacement, especially in industries like manufacturing, services, and finance. This further exacerbates public concerns about future employment prospects. Therefore, previous AI anxiety research has expressed concerns about the lack of transparency, biases and ethical issues, risks of the loss of control, and the potential for job displacement [23].
During the data-driven model phase, AI was mainly viewed as an analytical tool, suitable for processing and analyzing large volumes of data [19]. As deep learning models have become increasingly prevalent in areas such as natural language processing, AI has acquired the capability to carry out generative tasks [24]. Generative AI demonstrates computational techniques for creating realistic text, images, and audio, and is capable of producing human-like outputs [25]. However, this also brings new potential risks and harms. Generative AI technology can be maliciously used. For instance, Deepfake technology serves as a representative case [7,26]. These complex and covert abuse risks not only extend the previous concerns about analytical AI but also broaden the scope of AI anxiety. This study introduces the term “generative AI abuse anxiety”. It refers to users’ psychological concerns about generative AI technology being used for unethical, illegal, or harmful activities.

2.2. Technology Acceptance Model

The TAM, proposed by Davis [27], is a widely recognized framework for understanding and predicting technology adoption. Previous research has demonstrated its effectiveness and robustness [28]. Central to the TAM are the concepts of PU and the perceived ease of use (PEOU) [29]. PU refers to the extent to which users believe that a technology improves their ability to perform tasks, while PEOU reflects users’ perception of how easy the technology is to use [30]. Both factors directly influence users’ acceptance of the technology, which in turn affects their ITU [31]. ITU is a strong predictor of both technology adoption and usage behavior [32]. The TAM has found extensive application in fields such as information technology and e-commerce [33].
In the original TAM, PEOU is described as “the degree to which a person believes that using technology will be free of effort” [27]. This implies that engaging with the technology does not require additional learning or a cognitive load [31]. Users can clearly understand the expected outcomes of using the technology. In fact, the rapid popularity of generative AI tools such as ChatGPT can be largely attributed to their highly user-friendly and accessible interfaces. However, ease of use at the interface level often coexists with considerable complexity and unpredictability at the system level. While users may find it easy to operate generative AI tools, they often struggle to understand, evaluate, or predict the underlying algorithmic logic and generated outputs. This apparent interface simplicity does not necessarily translate into genuine ease of use in understanding system behavior or anticipating outputs. As a result, the direct application of PEOU in this context becomes complicated.
In practice, users of generative AI must frequently evaluate the validity, reliability, and trustworthiness of system outputs. This ongoing evaluation imposes a much higher cognitive load than what is implied by the original “effortless” notion in the TAM. Therefore, although interface usability remains important, the user experience with generative AI cannot be fully captured by the traditional definition of PEOU.
To address this limitation, we introduce trust as an alternative factor. Trust influences how individuals perceive and use technology [34,35,36]. In human–computer interaction, users may worry that the decisions made by generative AI may deviate from expectations or be misused [4]. These perceived risks often stem from users’ concerns about the control they have when using generative AI. As such, the acceptance of and willingness to use generative AI are largely related to trust [37]. Research has shown that high visibility and transparency can enhance users’ satisfaction with system performance and increase technology acceptance [17]. Fairness and accountability can further improve user satisfaction and promote continued use [5]. Vorm and Combs [38] also found that individuals’ willingness to adopt intelligent systems (such as autonomous vehicles) is primarily driven by trust in the system, rather than the simplicity of the interface. Accordingly, we replaced PEOU with a multidimensional trust framework in our model.
Additionally, the TAM allows researchers to construct models by adding external variables, integrating other potential factors that may influence users’ adoption of a particular technology [39]. Consequently, we also incorporate GAIA as an external variable, modifying and extending the original TAM structure.

2.3. Trust

Trust, as a psychological construct, is grounded in specific social exchange relationships [40]. According to social exchange theory, interpersonal interactions are based on the principle of reciprocity [41]. Individuals engage in exchanges to receive benefits, with the expectation that the other party will fulfill certain obligations within the relationship. In interactions with generative AI, trust-building also adheres to similar social exchange norms [10]. Previous research has expanded the concept of interpersonal trust to human–computer interactions, particularly in fields like automation and robotics [42]. Dimensions such as ability, benevolence, and integrity are considered to enhance the human-like qualities of machines.
Although AI technology itself lacks the capacity for moral judgment, as technology gradually acquires more human-like characteristics, AI behaviors begin to exhibit dimensions that are similar to interpersonal trust [14]. This allows users to attribute human-like trust qualities to AI. Došilović et al. [43] highlighted that for AI systems to be deemed trustworthy, they must fulfill key principles such as benevolence, non-maleficence, and integrity. Califf et al. [35] also argued that human-like trust can be operationalized through the dimensions of ability, benevolence, and integrity. Based on this, we suggest that trust in generative AI will follow a comparable pattern. Specifically, ability refers to whether generative AI has sufficient capability and effectiveness to complete a given task, thus demonstrating high reliability and consistency. Benevolence emphasizes whether generative AI benefits the users’ interests during its development, deployment, and use. Integrity refers to whether generative AI adheres to ethical and moral standards, thereby avoiding potential harm to users or society.
The interaction between users and AI can also be understood as a social exchange process. It is essential to recognize that all types of social exchange carry uncertainty and risk [44]. Within this framework, trust in generative AI is grounded in rational evaluation and risk management [37]. When using generative AI, users evaluate the potential risks (such as time and energy investment, data leakage, etc.) and assess the potential benefits (such as improved creative efficiency, personalized services, etc.) [45]. When users perceive the costs of interaction to be too high or the provided service does not meet their expectations, they may lose trust in the technology. Conversely, when generative AI significantly enhances the anticipated benefits, the technology is considered more trustworthy.
In this exchange process, systemic trust in the technology as a whole is often more complex than interpersonal trust. This is because ensuring AI transparency and explainability is more challenging than it is with other technologies, especially when using non-linear models like deep learning. Transparency involves the degree to which users comprehend the operational principles and decision-making processes of AI systems [13]. It emphasizes whether users have access to detailed information about how algorithms function and how decisions are made. In other words, transparency reflects whether users are able to comprehend how generative AI systems operate. Explainability pertains to the capability to clarify how AI functions and the reasons behind its specific decisions [17]. These explanations should be simple and understandable. Additionally, fairness and accountability are also important in establishing systemic trust in AI technology. Fairness requires that the algorithmic process does not lead to discriminatory or unjust outcomes [21]. Accountability ensures that AI owners, designers, or users are held responsible for their technological actions and decision-making outcomes [46]. It concerns whether responsibility can be clearly attributed when AI systems cause problems or are misused, and whether those responsible can be identified and held accountable. The core of accountability lies in the traceability of responsibility and the clear identification of those responsible, rather than in the transparency of the algorithm’s operational mechanisms. Users are more likely to perceive an algorithm as trustworthy and useful when they believe it is fairer, more accountable, transparent, and explainable [47,48]. Therefore, we believe the FATE framework can effectively assess systemic trust in generative AI.
Previous research has shown that trust in e-commerce websites helps increase consumers’ willingness to make purchases on those sites [49,50]. For instance, Seckler et al. [51] highlighted that the quality of a website is a significant predictor of users’ trust in it. Security and privacy directly affect users’ sense of safety when engaging in transactions and interactions on a website [35], particularly in virtual environments involving high uncertainty and risk. A lack of security can suppress users’ willingness to engage in transactions online [50]. Trust in websites generally encompasses users’ perceptions of the platform’s usefulness and reliability, and their satisfaction with its features and services [52]. This concept of trust is similar to the trust users have in generative AI. In other words, trust in websites reflects whether users are willing to rely on the platform to use generative AI for content creation.
In this study, we operationalize Trust in Websites through two subdimensions: IT Quality, and Security and Privacy. IT Quality reflects the website’s information quality, system quality, and service quality. Importantly, IT Quality not only embodies users’ confidence in the technological infrastructure, but also directly influences perceived ease of use by ensuring that the system is accessible, intuitive, and user-friendly. Therefore, IT Quality bridges the traditional concept of PEOU and broader dimensions of trust, serving as both a functional and theoretical proxy for PEOU in our model. Security and Privacy focuses on the website’s security features, defense mechanisms, and information protection [16].

2.4. Hypothesis and Theoretical Model

Drawing on the literature and theories presented in Section 2.1, Section 2.2 and Section 2.3, the following hypotheses are proposed:
H1. 
GAIA negatively affects PU.
H2. 
PU positively affects the AGAI.
H3. 
GAIA negatively affects the AGAI.
H4. 
GAIA negatively affects Trust in Websites: (a) IT Quality, (b) Security and Privacy.
H5. 
GAIA negatively affects Human-like Trust: (a) Ability, (b) Benevolence, and (c) Integrity.
H6. 
GAIA negatively affects System-like Trust: (a) Fairness, (b) Accountability, (c) Transparency, and (d) Explainability.
H7. 
Trust in Websites ((a) IT Quality, (b) Security and Privacy) positively affects PU.
H8. 
Human-like Trust ((a) Ability, (b) Benevolence, and (c) Integrity) positively affects PU.
H9. 
System-like Trust ((a) Fairness, (b) Accountability, (c) Transparency, and (d) Explainability) positively affects PU.
H10. 
Trust in Websites ((a) IT Quality, (b) Security and Privacy) positively affects the AGAI.
H11. 
Human-like Trust ((a) Ability, (b) Benevolence, and (c) Integrity) positively affects the AGAI.
H12. 
System-like Trust ((a) Fairness, (b) Accountability, (c) Transparency, and (d) Explainability) positively affects the AGAI.
H13. 
PU positively affects ITU.
H14. 
The AGAI positively affects ITU.
The theoretical model is shown in Figure 1.

3. Method

3.1. Measures and Questionnaire

This study employed a five-point Likert scale for all questionnaire items. The GAIA scale was based on the AI anxiety scale created by Wang and Wang [53], utilizing three items from the social-technical blind spot dimension of that scale.
The scales for PU (4 items), AGAI (4 items), and ITU (3 items) were adapted from the scale developed by Choung et al. [14].
The Trust in Websites scale, adapted from Hsu et al. [16], consists of five items, covering two dimensions: IT Quality (2 items) and Security and Privacy (3 items).
Human-like Trust includes three dimensions: Ability, Benevolence, and Integrity. The Ability dimension was assessed using three items adapted from Lankton et al. [12]. Benevolence and Integrity were each measured with three items derived from Choung et al. [14].
System-like Trust includes four dimensions: Fairness, Accountability, Transparency, and Explainability. This study adapted and expanded upon the scale developed by Shin [17], with three items for each dimension. The full list of items is provided in Appendix A.

3.2. Participants and Data Collection

Data for this study were collected using snowball sampling. We randomly invited users with experience in using generative AI to participate in an online survey. To increase participants’ willingness to engage in the survey and refer others, a reward system was designed. Participants who successfully completed the survey would receive a $2 reward. An additional $0.7 was offered for each successful referral, with a maximum of two referrals per participant. All participants provided informed consent prior to completing the survey. Participation was voluntary and anonymous. The data collected were used solely for research purposes, and no personally identifiable or sensitive information was disclosed. The online survey lasted for one week. After eliminating questionnaires with excessive identical responses or answers showing clear patterns, 318 valid questionnaires were retained.
As shown in Table 1, the sample consisted of 53.1% male and 46.9% female participants. The age distribution was broad: 18.2% were under 23, 33.6% were aged 23–30, 32.7% were aged 31–40, 12.3% were aged 41–50, and 3.1% were over 50. Regarding educational backgrounds, 12.9% held an associate’s degree or below, 44.7% held a bachelor’s degree, and 42.6% held a graduate degree. For generative AI usage frequency, 12.3% reported daily use, 31.1% reported weekly use, 21.1% reported biweekly use, and 35.5% reported monthly use.
The findings of Harman’s single-factor test revealed that the largest variance explained by a single factor was 36.995%, which is below the 50% threshold for total variance. This suggests that common method bias is not a significant issue in the collected data.
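For readers who wish to replicate this check outside the original software, the sketch below shows one common way to approximate Harman’s single-factor test in Python: the share of variance captured by the first unrotated principal component of all items. The `items` DataFrame and its dimensions are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch of Harman's single-factor test (assumed inputs): the share of
# variance captured by the first unrotated principal component of all items.
import numpy as np
import pandas as pd

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Percentage of total variance explained by a single unrotated factor."""
    corr = items.corr().to_numpy()               # items on a common scale
    eigenvalues = np.linalg.eigvalsh(corr)       # ascending order
    return eigenvalues[-1] / eigenvalues.sum() * 100.0

# Example with simulated Likert-type data standing in for the 318 responses.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(318, 40)).astype(float))
print(f"Variance explained by one factor: {harman_single_factor_share(items):.3f}%")
```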

3.3. Measurement Model Assessment

SmartPLS 4 was employed in this study to assess the reliability and validity of the measurement model [54]. As shown in Table 2, all constructs had Cronbach’s alpha values exceeding 0.7, reflecting strong internal consistency. The composite reliability (CR) values also exceeded 0.7, and the average variance extracted (AVE) values were greater than 0.5, demonstrating satisfactory convergent validity. Additionally, the variance inflation factors (VIFs) were all below 5, indicating that multicollinearity does not pose a significant issue in this study.
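As an illustration of how these reliability and validity statistics are defined, the following sketch computes Cronbach’s alpha, composite reliability, AVE, and VIF from hypothetical item scores and standardized loadings; it is a simplified stand-in for the SmartPLS 4 output, not a reproduction of it.

```python
# Reliability/validity statistics of the kind reported in Table 2, computed from
# hypothetical inputs: `items` holds one construct's indicator scores and
# `loadings` its standardized outer loadings.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean squared standardized loading
    return (loadings ** 2).mean()

def vif(predictors: pd.DataFrame) -> pd.Series:
    # VIF_j = 1 / (1 - R^2_j), regressing predictor j on all the others.
    X = predictors.to_numpy()
    out = {}
    for j, name in enumerate(predictors.columns):
        y = X[:, j]
        others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[name] = 1.0 / (1.0 - r2)
    return pd.Series(out)
```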
The Fornell–Larcker criterion and the Heterotrait–Monotrait (HTMT) ratio method were employed to evaluate discriminant validity [54]. As shown in Table 3 and Table 4, the square roots of the AVE for each construct exceeded the correlations between that construct and the others. All HTMT values were below the threshold of 0.85, confirming that discriminant validity is good.
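The two discriminant-validity criteria can likewise be expressed compactly. The sketch below assumes hypothetical item data whose column names identify their construct and implements the Fornell–Larcker comparison and the HTMT ratio in plain numpy/pandas.

```python
# Discriminant-validity checks on hypothetical item data (column names such as
# "PU_1" or "ITQ_2" are placeholders for the construct-prefixed indicators).
import numpy as np
import pandas as pd

def fornell_larcker_ok(ave_a: float, ave_b: float, corr_ab: float) -> bool:
    # sqrt(AVE) of each construct must exceed their inter-construct correlation.
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)

def htmt(items: pd.DataFrame, construct_a: list[str], construct_b: list[str]) -> float:
    corr = items.corr().abs()
    # Mean heterotrait-heteromethod correlation (between-construct item pairs).
    hetero = corr.loc[construct_a, construct_b].to_numpy().mean()
    # Mean monotrait-heteromethod correlation (within-construct item pairs).
    def mono(cols: list[str]) -> float:
        block = corr.loc[cols, cols].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()
    return hetero / np.sqrt(mono(construct_a) * mono(construct_b))

# e.g. htmt(items, ["PU_1", "PU_2", "PU_3", "PU_4"], ["ITQ_1", "ITQ_2"]) < 0.85
```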
To further examine potential overlaps among subdimensions in the three-dimensional trust framework, two rounds of exploratory factor analysis (EFA) were conducted. In the first round, principal component analysis with Varimax rotation revealed that “accountability” and “transparency” under System-like Trust tended to load on the same factor, indicating a degree of aggregation. Nevertheless, the three trust types (Trust in Websites, Human-like Trust, and System-like Trust) remained clearly differentiated in the factor structure, providing preliminary support for the conceptual distinctiveness of the three trust dimensions. In the second round, with the number of factors fixed at 13, all subdimensions showed high loadings (>0.70) on their respective factors, with no significant cross-loadings. This further confirms the discriminant validity of each subdimension and the structural clarity of the three-dimensional trust framework.
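For transparency about the rotation step, the following hand-rolled sketch illustrates principal-component extraction followed by Varimax rotation; it is not the statistical package used for the analysis, and the `items` DataFrame and the fixed 13-factor count are placeholders mirroring the description above.

```python
# Hand-rolled illustration of the first EFA round: principal components with
# Varimax rotation on hypothetical item scores.
import numpy as np
import pandas as pd

def pca_loadings(items: pd.DataFrame, n_factors: int) -> np.ndarray:
    corr = np.corrcoef(items.to_numpy(), rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    top = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, top] * np.sqrt(vals[top])     # component loadings

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    p, k = loadings.shape
    rotation = np.eye(k)
    prev = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        if s.sum() < prev * (1 + tol):           # converged
            break
        prev = s.sum()
    return loadings @ rotation

# rotated = varimax(pca_loadings(items, n_factors=13))
```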
Building on the EFA results, we constructed a second-order measurement model. Trust in Websites, Human-like Trust, and System-like Trust were modeled as second-order factors, each comprising several first-order subdimensions. The model demonstrated good fit (χ²/df = 1.742, CFI = 0.955, IFI = 0.955, TLI = 0.949, RMSEA = 0.048, SRMR = 0.0782), and all standardized loadings exceeded accepted thresholds (see Appendix A for details), supporting the structural integrity and measurement consistency of the three-dimensional trust framework.
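A minimal sketch, assuming the semopy package, of how a comparable second-order CFA could be specified in Python is given below; the item names (itq1, sap1, and so on) are hypothetical placeholders for the indicators in Table A1, and the resulting fit statistics would be compared against the thresholds reported above.

```python
# Second-order CFA sketch assuming the semopy package; item names are
# hypothetical placeholders, and `data` columns must match them.
import pandas as pd
import semopy

SECOND_ORDER_MODEL = """
ITQ =~ itq1 + itq2
SAP =~ sap1 + sap2 + sap3
ABL =~ abl1 + abl2 + abl3
BEN =~ ben1 + ben2 + ben3
INT =~ int1 + int2 + int3
FAI =~ fai1 + fai2 + fai3
ACC =~ acc1 + acc2 + acc3
TRA =~ tra1 + tra2 + tra3
EXP =~ exp1 + exp2 + exp3
TrustInWebsites =~ ITQ + SAP
HumanLikeTrust =~ ABL + BEN + INT
SystemLikeTrust =~ FAI + ACC + TRA + EXP
"""

def fit_second_order_cfa(data: pd.DataFrame) -> pd.DataFrame:
    model = semopy.Model(SECOND_ORDER_MODEL)
    model.fit(data)                   # observed columns must match item names
    return semopy.calc_stats(model)   # chi2, CFI, TLI, RMSEA, etc.
```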
Furthermore, the inter-factor correlations among the three second-order constructs were 0.637 (Trust in Websites and Human-like Trust), 0.491 (Trust in Websites and System-like Trust), and 0.579 (System-like Trust and Human-like Trust), all well below the threshold of 0.85. This provides additional evidence for the discriminant validity and necessity of distinguishing each trust dimension.

3.4. Structural Model

To assess the internal generalizability and robustness of the measurement model, measurement invariance was tested across key demographic subgroups (gender, age, and education) using the MICOM procedure in SmartPLS [55]. The results demonstrated that all constructs achieved both partial and full measurement invariance, indicating that the measurement model is stable and consistent across these major subgroups (see Appendix A for details).
The structural model test results are presented in Figure 2 and Table 5. This study evaluated the model using path coefficients and adjusted R² values. Path coefficients reflect the strength of the relationships among variables, while the adjusted R² value indicates the model’s explanatory power. A higher R² value suggests a stronger explanatory power. The results show that the three trust dimensions and GAIA together account for 70.3% of the variance in Acceptance. PU and AGAI explain 52.8% of the variance in users’ ITU. This confirms the significant explanatory power of the proposed model.
Additionally, GAIA showed negative associations with PU (β = −0.069, p < 0.1) and AGAI (β = −0.082, p < 0.1), and significant negative correlations with Trust in Websites (β = −0.270, p < 0.001), Human-like Trust (β = −0.344, p < 0.001), and System-like Trust (β = −0.369, p < 0.001). PU showed significant positive correlations with Acceptance (β = 0.152, p < 0.01) and ITU (β = 0.478, p < 0.001). Moreover, Acceptance and ITU were significantly positively correlated (β = 0.313, p < 0.001).
It should be noted, however, that the negative associations of GAIA with PU (β = −0.069, p < 0.1) and AGAI (β = −0.082, p < 0.1) are quite small and only marginally significant. This suggests that, while these relationships are statistically observable in this sample, their actual impact on users’ perceived usefulness and acceptance is limited. Therefore, further analysis is warranted to confirm the validity and generalizability of these effects.
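To make the reported quantities concrete, the sketch below estimates standardized path coefficients and adjusted R² for a single structural equation (ITU regressed on PU and AGAI) using OLS on hypothetical composite scores. This is a deliberately simplified stand-in for the PLS-SEM estimation, intended only to illustrate how β and adjusted R² are read.

```python
# Illustrative sketch only: standardized path coefficients and adjusted R² for
# one structural equation (ITU ~ PU + AGAI) on hypothetical composite scores.
import numpy as np
import pandas as pd

def standardized_paths(scores: pd.DataFrame, outcome: str, predictors: list[str]) -> dict:
    z = (scores - scores.mean()) / scores.std(ddof=1)        # standardize
    X = np.column_stack([np.ones(len(z))] + [z[p] for p in predictors])
    y = z[outcome].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), len(predictors)
    r2 = 1 - resid.var() / y.var()
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return {"paths": dict(zip(predictors, beta[1:])), "adj_R2": adj_r2}

# e.g. standardized_paths(scores, outcome="ITU", predictors=["PU", "AGAI"])
```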

3.5. Structural Model for Trust in Websites

This study verified hypotheses H4–H12 using three independent structural models. The structural model results for Trust in Websites are presented in Figure 3 and Table 6. GAIA was significantly negatively correlated with all subdimensions of Trust in Websites (IT Quality, β = −0.216, p < 0.001; Security and Privacy, β = −0.249, p < 0.001). At the same time, all subdimensions of Trust in Websites significantly positively impacted PU (IT Quality, β = 0.372, p < 0.001; Security and Privacy, β = 0.411, p < 0.001) and AGAI (IT Quality, β = 0.171, p < 0.01; Security and Privacy, β = 0.200, p < 0.01).
In addition, we conducted supplementary mediation analyses to examine whether IT Quality, as a proxy for PEOU, influences user acceptance and PU through trust dimensions. The following three mediation pathways were tested:
  • IT Quality → PU → AGAI
  • IT Quality → System-like Trust → AGAI
  • IT Quality → Human-like Trust → AGAI
The results showed that IT Quality significantly predicted both System-like Trust and Human-like Trust, which in turn exerted significant positive effects on Acceptance (see Appendix A). This indicates that users’ perception of ease of use (as captured by IT Quality) indeed enhances Acceptance by improving both perceived usefulness and perceived trustworthiness. In other words, IT Quality has conceptually and structurally taken over the role of PEOU in the traditional TAM. It not only relates to PU, but also replaces the possible mediating effects of PEOU through trust dimensions, forming a richer and more complete pathway of usability’s influence.
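The mediation pathways above were tested within the PLS framework; as a rough illustration, the sketch below bootstraps one indirect effect (ITQ → System-like Trust → AGAI) from hypothetical composite scores using plain OLS regressions and a percentile confidence interval.

```python
# Percentile-bootstrap sketch for one indirect effect on hypothetical composite
# scores; the column names are placeholders, not the study's variables.
import numpy as np
import pandas as pd

def bootstrap_indirect(scores: pd.DataFrame, x: str, m: str, y: str,
                       n_boot: int = 5000, seed: int = 42):
    rng = np.random.default_rng(seed)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    effects = np.empty(n_boot)
    for b in range(n_boot):
        sample = z.iloc[rng.integers(0, len(z), len(z))]       # resample rows
        a_path = np.polyfit(sample[x], sample[m], 1)[0]        # x -> m
        X = np.column_stack([np.ones(len(sample)), sample[m], sample[x]])
        beta, *_ = np.linalg.lstsq(X, sample[y].to_numpy(), rcond=None)
        effects[b] = a_path * beta[1]                          # indirect = a * b
    low, high = np.percentile(effects, [2.5, 97.5])
    return effects.mean(), (low, high)

# e.g. bootstrap_indirect(scores, x="ITQ", m="SystemLikeTrust", y="AGAI")
```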

3.6. Structural Model for Human-like Trust

The structural model results for Human-like Trust are displayed in Figure 4 and Table 7. GAIA was significantly negatively correlated with all subdimensions of Human-like Trust (Ability, β = −0.323, p < 0.001; Benevolence, β = −0.282, p < 0.001; Integrity, β = −0.239, p < 0.001). Additionally, all subdimensions of Human-like Trust significantly positively impacted PU (Ability, β = 0.182, p < 0.01; Benevolence, β = 0.232, p < 0.001; Integrity, β = 0.159, p < 0.01) and AGAI (Ability, β = 0.095, p < 0.01; Benevolence, β = 0.240, p < 0.001; Integrity, β = 0.211, p < 0.001).

3.7. Structural Model for System-like Trust

The structural model results for System-like Trust are presented in Figure 5 and Table 8. GAIA was significantly negatively correlated with all subdimensions of System-like Trust (Fairness, β = −0.301, p < 0.001; Accountability, β = −0.244, p < 0.001; Transparency, β = −0.238, p < 0.001; Explainability, β = −0.346, p < 0.001). Regarding the influence on PU, Fairness (β = 0.345, p < 0.001) and Explainability (β = 0.415, p < 0.001) showed significant positive correlations. However, the impacts of Accountability (β = 0.053, p > 0.05) and Transparency (β = 0.074, p > 0.05) were not significant. Additionally, for the influence on AGAI, Fairness (β = 0.144, p < 0.05), Explainability (β = 0.188, p < 0.05), and Transparency (β = 0.171, p < 0.01) were significantly positively correlated, whereas Accountability (β = 0.032, p > 0.05) had no significant impact.

4. Discussion

This study, grounded in the TAM and a multidimensional trust framework, offers a comprehensive examination of users’ psychological responses and behavioral intentions during the generative AI adoption process. The results show that Generative AI Abuse Anxiety undermines both emotional and cognitive responses to various trust subdimensions, including Trust in Websites, Human-like Trust, and System-like Trust. It is important to note that although GAIA is statistically negatively correlated with PU (β = −0.069, p < 0.1) and AGAI (β = −0.082, p < 0.1), these effect sizes are relatively small, warranting cautious interpretation. These marginal effects suggest that the practical impact of GAIA on users’ perceptions and acceptance is limited, and the findings should be regarded as preliminary or exploratory. Thus, the real-world significance of these results should not be overstated in theoretical or practical discussions.
In terms of conceptual clarity, transparency and accountability initially exhibited some factor overlap in the first round of EFA. Nevertheless, in the second round of EFA, in which the number of factors was fixed at thirteen, these subdimensions showed distinct factor loadings (all exceeding 0.70) with no significant cross-loadings. The results of the second-order factor analysis also confirmed their distinct roles within the System-like Trust dimension. These empirical findings are consistent with theoretical expectations: Transparency emphasizes the openness of algorithms and users’ understanding, while Accountability focuses on clearly defined responsibility and traceability. Taken together, the theoretical rationale and empirical results provide strong support for the conceptual and structural distinctiveness of these trust subdimensions.
Furthermore, this study analyzes the role of website trust, Human-like Trust, and System-like Trust in the Acceptance of Generative AI technology. The findings reveal that while subdimensions of trust significantly influence users’ technology acceptance and ITU, the specific mechanisms of influence differ. For instance, Trust in Websites has a particularly strong influence on PU, while Human-like Trust primarily enhances users’ acceptance of the technology through emotional factors.
Specifically, regarding PU, the overall impact of Trust in Websites is significantly larger than that of System-like and Human-like Trust. For most users, IT Quality and privacy protection are key factors in enhancing the PU of generative AI. In contrast, the roles played by System-like Trust and Human-like Trust are relatively limited. It is worth noting that the relationships of Accountability and Transparency with PU were not significant. This result highlights certain limitations of System-like Trust. For most users, technical details or complex transparency information do not necessarily enhance their perception of usefulness. In reality, only a small group of professional machine learning practitioners are willing to interpret and evaluate the technical mechanisms or transparency reports of AI systems. For general users, too much transparency may lead to information overload, reducing user experience or even causing frustration [56,57]. In other words, users tend to prefer explanations that are concise, intuitive, and readily comprehensible. Moreover, AI system providers may formally acknowledge that users have the right to be informed. However, they often use ambiguous terminology to fulfill their informational obligations [58].
Regarding acceptance, Human-like Trust has the largest overall effect on the AGAI, followed by Trust in Websites. System-like Trust is slightly lower than Trust in Websites. This finding emphasizes the significance of the social characteristics of generative AI in users’ acceptance. Human-like Trust enhances users’ emotional dependence on and identification with the technology, aligning with prior studies that stress the role of emotional connection and perceived social interaction in technology adoption [14,35]. The effect of Accountability on Acceptance, however, did not reach statistical significance. One possible explanation is the lack of established accountability mechanisms for unethical behavior involving generative AI abuse. When users perceive a lack of assurance or enforceability in holding technology misuse accountable, their overall trust in the technology is limited. This emphasizes the need for effective regulatory mechanisms and ethical safeguards when developing trustworthy generative AI systems.

4.1. Theoretical Implications

This study provides an innovative optimization of the TAM in the context of generative AI. Unlike the traditional TAM, which primarily emphasizes PU and PEOU, this study, drawing on social exchange theory, replaces PEOU with trust and incorporates additional external variables. This theoretical adjustment offers a more suitable framework for applying the TAM to generative AI research, thereby enhancing its relevance and explanatory capacity in the context of emerging technologies.
The construct of GAIA proposed in this study is conceptually distinct from, and expands upon, related notions such as risk perception and technology anxiety. Technology anxiety typically refers to the tension, fear, and low self-efficacy that users experience when learning how to use or using new technologies [59]. Risk perception focuses on individuals’ subjective interpretations of potential dangers or uncertainties [60]. In the context of technology, this primarily concerns uncertainties or hazards associated with routine or expected use, such as worries about reliability, privacy breaches, or negative outcomes [61]. By contrast, GAIA captures users’ concerns about the potential for generative AI systems to be used in unethical, illegal, or harmful ways. This psychological variable reflects anxieties about the misuse of AI that extend beyond normal or intended use scenarios, thus surpassing the boundaries of traditional risk perception and technology anxiety. For example, it encompasses fears related to online fraud, misinformation, and privacy violations. Such risks and harms cannot be adequately explained by technological or operational risk alone. By theoretically distinguishing and empirically validating this construct, the present study provides a more precise theoretical lens for understanding the psychological mechanisms underpinning trust and technology adoption in the era of AI.
Furthermore, in response to Choung et al. [14]’s call for the development of a multidimensional approach to AI trust, this study extends the existing theoretical framework of technology trust. We propose and validate three core subdimensions of generative AI trust: Trust in Websites, Human-like Trust, and System-like Trust. This framework encompasses multiple key sources of technology trust. A key contribution of this study is the identification of Trust in Websites as a crucial component of generative AI trust, addressing a gap in the existing body of research. Additionally, we extend the application of interpersonal trust in technology, expanding this concept from automation and robotics to generative AI. Through this theoretical expansion, we broaden the multidimensional understanding of trust in technology acceptance.
The study further reveals perceptual differences among the dimensions of generative AI trust. The limitations of Transparency and Accountability highlight weak points in the design and application of generative AI. In sociology, trust is built on rational assessment and objective evidence [62]. In this context, when the source of trust cannot be attributed to the algorithm’s decision-making process, the evaluation methods or data must exhibit trustworthy model characteristics. One potential approach is to conduct proxy assessments through independent third parties. External verification could help eliminate users’ doubts about the technology’s reliability. Furthermore, we advocate for incorporating ethical reviews and regulatory mechanisms into the ongoing evolution of technology acceptance theory. The standardized application of generative AI depends not only on the system’s technological performance but also on ethical guarantees and a sense of social responsibility. Finally, we recommend implementing effective accountability mechanisms to ensure that feedback on abuse or improper decisions is swiftly addressed, thereby fostering a positive cycle of rapid feedback, accountability, and remediation.

4.2. Practical Implications

The abuse of generative AI often involves privacy breaches and data security issues. Typically, this occurs when users’ personal information is generated without authorization or personal data is used for training or content generation. Take the European Union’s General Data Protection Regulation (GDPR) as an example. If malicious behavior involves the unlawful processing of personal data, both data controllers and data processors (i.e., the owners or operators of AI systems) must be held accountable to the data subjects [63]. However, there remains some ambiguity in the current allocation of responsibility, particularly regarding the liability of AI users [64]. Under existing legal frameworks, responsibility is primarily assigned to AI system developers, manufacturers, and vendors, while the accountability of malicious users remains unclear. This is especially true in the application of high-risk generative AI, where the regulatory scope within legal systems remains a topic of dispute [58,65].
In this context, enhancing the legal certainty of accountability mechanisms is crucial for boosting public trust in generative AI. By examining how GAIA affects user behavior, this study provides valuable insights for policymakers and other stakeholders, particularly in clarifying responsibility allocation, mitigating technological misuse risks, and strengthening user trust. We recommend that platforms clearly inform users when content is generated by AI and provide accessible explanations about how and why decisions or recommendations are made. For example, in high-risk domains such as those of healthcare and finance, users should be explicitly notified as to whether relevant advice is generated by AI. In addition, regulatory authorities should clearly delineate responsibilities among AI system developers, service providers, and end users. Platforms should also maintain transparent usage logs and audit trails to facilitate prompt accountability and traceability in cases of malicious use. This research supports the sustainable development of generative AI technology and makes a tangible contribution toward building a more equitable, transparent, accountable, and explainable AI ecosystem.

4.3. Limitations and Future Directions

This study has several limitations. First, a snowball sampling method was used for data collection. While this approach can efficiently and quickly recruit participants, it tends to introduce bias towards certain fields, regions, or groups with specific attitudes. In particular, the combination of snowball sampling and monetary incentives further exacerbates such biases. For example, this approach may attract participants who have a stronger interest in or particular biases toward generative AI, while those who lack trust in generative AI may be less willing to participate. Similarly, individuals with higher levels of AI anxiety are more likely to refuse participation or avoid identifying themselves as part of this group. Consequently, our findings likely reflect more extreme viewpoints and do not fully capture the entire spectrum of user acceptance and anxiety perceptions regarding generative AI. This could limit the external validity and generalizability of our results.
Although we conducted measurement invariance tests across demographic subgroups such as gender, age, and educational level to verify the robustness of the model, this analysis cannot fully eliminate the risk of systematic selection bias caused by the sampling method. Regarding age grouping, we initially planned to use 40 years as the cutoff for subgroup analysis. However, there were only 49 respondents over the age of 40, which falls short of the recommended minimum threshold for robust multi-group analysis and measurement invariance testing (typically at least 50 participants per group). Therefore, to ensure the reliability of statistical analyses and the balance of sample sizes, we ultimately used 30 years as the cutoff, dividing respondents into two relatively balanced groups (age ≤ 30, n = 165; age > 30, n = 153).
It is important to note that the number of respondents over the age of 40 in this study was relatively small. This imbalance between groups may increase the standard error in subgroup analysis, reduce the stability and accuracy of the results, and further limit the applicability and generalizability of findings related to older user groups. We therefore recommend that future research increase the sample sizes of users over 40 to enhance the robustness of subgroup analysis.
To address the above limitations, we recommend that future studies adopt more diverse sampling strategies, such as stratified random sampling, to better cover various groups within the target population. In addition, future research could conduct robustness checks based on usage frequency and familiarity with specific platforms to further validate the applicability and consistency of the findings across different user groups. Cross-national or cross-regional comparative studies are also worthy of exploration, as they could further reveal both the differences and commonalities in Generative AI Abuse Anxiety, technology trust, and acceptance intentions among users from different cultural or social backgrounds. Such extensions would enhance the comprehensiveness and generalizability of research findings.
Second, trust is a dynamic process that may change as users become more familiar with the technology. This study focused on static user attitudes and behavioral predictions. Future research could consider incorporating a temporal dimension by tracking changes in users’ attitudes over extended periods of technology use. This will offer deeper insight into the evolving relationship between trust and technology acceptance.
Furthermore, this study tested the impact of GAIA and trust on user technology acceptance. Demographic variables, such as age and gender, may also have significant moderating effects across different user groups. Future research could incorporate these individual differences as moderating variables in the model to further enhance the theoretical generalizability of the study. In addition, this study adjusted the traditional TAM framework to fit the context of generative AI. While this adaptation improved the model’s contextual fit, it may limit the direct comparability of our findings with other TAM-based studies. We encourage future research to empirically examine the role of PEOU in different AI systems and application settings in order to further refine the applicability and theoretical boundaries of the TAM in the field of intelligent information systems.

Author Contributions

Conceptualization, Y.S.; validation, Y.S.; formal analysis, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S.; supervision, H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62402446) and the Zhejiang Provincial Natural Science Foundation of China (No. LQ24F020011).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. According to Gachon University policy, it received an exemption from the Institutional Review Board.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting this study’s findings are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The original scales were appropriately adapted and modified according to the research needs. The specific items of the survey questionnaire used in this study are shown in Table A1.
Table A1. Survey Scales.
Variables and Survey Items
Generative AI Abuse Anxiety
 Wang and Wang [53]
I am concerned that generative AI may be misused.
I worry about potential problems associated with generative AI.
I fear that generative AI could malfunction or spiral out of control.
Perceived Usefulness
 Choung et al. [14]
Using generative AI helps me complete tasks more quickly.
Generative AI improves my task performance.
Generative AI boosts my efficiency in completing tasks.
I find generative AI to be useful for my tasks.
Intention to Use
 Choung et al. [14]
I plan to keep using generative AI.
I anticipate continuing to use generative AI.
Using generative AI is something I would continue to do.
Generative AI Acceptance
 Choung et al. [14]
I have a positive attitude toward generative AI.
I find using generative AI enjoyable.
I believe using generative AI is a good choice.
I consider generative AI a smart way to accomplish tasks.
IT Quality
 Hsu et al. [16]
I find the generative AI on this website user-friendly.
I am satisfied with the services offered by the generative AI on this website.
Security and Privacy
 Hsu et al. [16]
This website has implemented security measures to protect users’ privacy and data when using generative AI.
I feel that using generative AI on this website is safe.
This website guarantees that my personal information is protected.
Ability
 Lankton et al. [12]
Generative AI is competent and effective in completing tasks.
Generative AI fulfills its role of content generation effectively.
Generative AI is a capable and proficient system for task execution.
Benevolence
 Choung et al. [14]
Generative AI cares about our needs and interests.
Generative AI aims to solve the problems faced by human users.
Generative AI strives to be helpful and does not act out of self-interest.
Integrity
 Choung et al. [14]
Generative AI is honest in human-computer interaction.
Generative AI honors its commitments and fulfills its promises.
Generative AI does not abuse the information and advantages it has over its users.
Fairness
 Shin [17]
I feel that the content generated by generative AI is fair.
I feel that generative AI is impartial and does not discriminate against anyone.
I trust that generative AI follows fair and unbiased procedures.
Accountability
 Shin [17]
I feel that there will be someone responsible for the negative impacts caused by generative AI.
I feel that the algorithms of generative AI will allow third-party inspection and review.
If generative AI is misused, I believe malicious users will be held accountable.
Transparency
 Shin [17]
I feel that the evaluation criteria and standards of generative AI’s algorithms are transparent.
I feel that any output generated by generative AI algorithms can be explained.
I feel that the evaluation criteria and standards of generative AI’s algorithms are understandable.
Explainability
 Shin [17]
I feel it is easy to understand how generative AI generates content.
I feel that the content generated by generative AI is explainable.
I feel that the explanations provided by generative AI can validate whether the content is correct.
Table A2, Table A3 and Table A4 report the detailed results of measurement invariance tests (MICOM procedure) across gender, age, and educational level. For each subgroup comparison, Step 2 (compositional invariance) and Step 3 (equality of means and variances) are reported. All constructs showed both partial and full measurement invariance, indicating robust and consistent measurement across all major subgroups.
Table A2. Measurement invariance results by gender (female vs. male).
Constructs | Step 2: Original Correlation | Step 2: 5.00% Quantile | Partial MI | Step 3a: Original Difference | Step 3a: Confidence Interval | Step 3b: Original Difference | Step 3b: Confidence Interval | Full MI
AGAI | 1 | 0.999 | Yes | −0.014 | [−0.220, 0.221] | −0.02 | [−0.271, 0.276] | Yes/Yes
ABL | 0.999 | 0.998 | Yes | 0.046 | [−0.214, 0.215] | 0.191 | [−0.281, 0.283] | Yes/Yes
ACC | 1 | 0.995 | Yes | 0.028 | [−0.216, 0.225] | 0.09 | [−0.374, 0.392] | Yes/Yes
BEN | 1 | 0.998 | Yes | −0.033 | [−0.213, 0.223] | −0.055 | [−0.255, 0.259] | Yes/Yes
EXP | 1 | 0.997 | Yes | −0.051 | [−0.217, 0.225] | −0.005 | [−0.333, 0.331] | Yes/Yes
FAI | 0.999 | 0.999 | Yes | −0.019 | [−0.216, 0.220] | 0.11 | [−0.357, 0.356] | Yes/Yes
GAIA | 0.998 | 0.995 | Yes | 0.116 | [−0.214, 0.214] | −0.159 | [−0.308, 0.312] | Yes/Yes
ITQ | 1 | 0.999 | Yes | −0.134 | [−0.218, 0.228] | 0.114 | [−0.288, 0.291] | Yes/Yes
INT | 0.999 | 0.997 | Yes | 0.075 | [−0.214, 0.224] | −0.119 | [−0.289, 0.301] | Yes/Yes
ITU | 1 | 0.999 | Yes | −0.014 | [−0.216, 0.218] | −0.08 | [−0.335, 0.324] | Yes/Yes
PU | 1 | 1 | Yes | −0.147 | [−0.214, 0.225] | 0.06 | [−0.307, 0.315] | Yes/Yes
SAP | 1 | 0.999 | Yes | −0.033 | [−0.216, 0.227] | −0.042 | [−0.290, 0.290] | Yes/Yes
TRA | 0.996 | 0.993 | Yes | 0.105 | [−0.225, 0.226] | 0.045 | [−0.371, 0.406] | Yes/Yes
Notes: Partial MI = Partial Measurement Invariance; Full MI = Full Measurement Invariance. GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use; ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
Table A3. Measurement invariance results by age (Age ≤ 30 vs. Age > 30).
Constructs | Step 2: Original Correlation | Step 2: 5.00% Quantile | Partial MI | Step 3a: Original Difference | Step 3a: Confidence Interval | Step 3b: Original Difference | Step 3b: Confidence Interval | Full MI
AGAI | 1 | 0.999 | Yes | 0.04 | [−0.217, 0.222] | 0.101 | [−0.267, 0.264] | Yes/Yes
ABL | 1 | 0.999 | Yes | 0.015 | [−0.213, 0.223] | 0.146 | [−0.278, 0.278] | Yes/Yes
ACC | 0.999 | 0.995 | Yes | 0.064 | [−0.219, 0.216] | −0.252 | [−0.384, 0.382] | Yes/Yes
BEN | 1 | 0.998 | Yes | −0.08 | [−0.220, 0.220] | 0.009 | [−0.254, 0.251] | Yes/Yes
EXP | 1 | 0.996 | Yes | −0.054 | [−0.216, 0.224] | −0.122 | [−0.339, 0.349] | Yes/Yes
FAI | 1 | 0.999 | Yes | −0.054 | [−0.216, 0.223] | 0.013 | [−0.358, 0.349] | Yes/Yes
GAIA | 0.998 | 0.995 | Yes | 0.005 | [−0.221, 0.216] | −0.07 | [−0.308, 0.316] | Yes/Yes
ITQ | 1 | 0.999 | Yes | 0.152 | [−0.219, 0.224] | 0.104 | [−0.290, 0.278] | Yes/Yes
INT | 0.999 | 0.997 | Yes | −0.034 | [−0.219, 0.216] | −0.008 | [−0.298, 0.288] | Yes/Yes
ITU | 1 | 0.999 | Yes | 0.001 | [−0.217, 0.216] | 0.021 | [−0.329, 0.320] | Yes/Yes
PU | 1 | 1 | Yes | −0.059 | [−0.219, 0.224] | 0.226 | [−0.311, 0.313] | Yes/Yes
SAP | 0.999 | 0.999 | Yes | 0.075 | [−0.219, 0.228] | 0.013 | [−0.284, 0.284] | Yes/Yes
TRA | 1 | 0.993 | Yes | −0.009 | [−0.220, 0.214] | −0.094 | [−0.387, 0.395] | Yes/Yes
Notes: Partial MI = Partial Measurement Invariance; Full MI = Full Measurement Invariance. GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use; ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
Table A4. Measurement invariance results by educational level (undergraduate and below vs. graduate).
Constructs | Step 2: Original Correlation | Step 2: 5.00% Quantile | Partial MI | Step 3a: Original Difference | Step 3a: Confidence Interval | Step 3b: Original Difference | Step 3b: Confidence Interval | Full MI
AGAI | 1 | 0.999 | Yes | 0.073 | [−0.222, 0.221] | −0.115 | [−0.277, 0.281] | Yes/Yes
ABL | 0.999 | 0.998 | Yes | 0.166 | [−0.221, 0.229] | −0.136 | [−0.279, 0.300] | Yes/Yes
ACC | 0.997 | 0.994 | Yes | 0.195 | [−0.223, 0.219] | −0.329 | [−0.375, 0.399] | Yes/Yes
BEN | 0.999 | 0.998 | Yes | 0.148 | [−0.221, 0.227] | −0.082 | [−0.249, 0.263] | Yes/Yes
EXP | 1 | 0.996 | Yes | 0.174 | [−0.215, 0.222] | −0.125 | [−0.341, 0.347] | Yes/Yes
FAI | 1 | 0.999 | Yes | 0.125 | [−0.228, 0.229] | −0.063 | [−0.356, 0.365] | Yes/Yes
GAIA | 0.997 | 0.995 | Yes | −0.076 | [−0.225, 0.220] | −0.202 | [−0.315, 0.328] | Yes/Yes
ITQ | 1 | 0.999 | Yes | −0.142 | [−0.218, 0.220] | 0.097 | [−0.283, 0.305] | Yes/Yes
INT | 0.999 | 0.997 | Yes | 0.126 | [−0.220, 0.220] | −0.17 | [−0.293, 0.298] | Yes/Yes
ITU | 1 | 0.999 | Yes | −0.017 | [−0.219, 0.220] | −0.221 | [−0.316, 0.346] | Yes/Yes
PU | 1 | 1 | Yes | 0.046 | [−0.219, 0.217] | −0.131 | [−0.300, 0.322] | Yes/Yes
SAP | 1 | 0.999 | Yes | −0.108 | [−0.222, 0.227] | −0.055 | [−0.300, 0.288] | Yes/Yes
TRA | 0.996 | 0.993 | Yes | 0.113 | [−0.224, 0.219] | −0.129 | [−0.380, 0.405] | Yes/Yes
Notes: Partial MI = Partial Measurement Invariance; Full MI = Full Measurement Invariance. GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use; ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
Table A5 presents the results of the second-order factor analysis for the three trust dimensions (Trust in Websites, Human-like Trust, and System-like Trust). All standardized loadings exceed the acceptable threshold, indicating that the structure of the three-dimensional trust framework is complete and consistent.
Table A6 reports the results of the mediation analysis examining whether the effect of IT Quality on user acceptance intention is mediated by System-like Trust and Human-like Trust.
Table A5. Second-order standardized regression weights.
Path (Structural Relationship) | Estimate | S.E. | C.R. | p
ITQ ← Trust in Websites | 0.668 | Regression weight of reference | – | –
SAP ← Trust in Websites | 0.854 | 0.160 | 6.936 | ***
FAI ← System-like Trust | 0.837 | Regression weight of reference | – | –
ACC ← System-like Trust | 0.715 | 0.076 | 9.763 | ***
TRA ← System-like Trust | 0.766 | 0.070 | 9.908 | ***
EXP ← System-like Trust | 0.761 | 0.085 | 9.108 | ***
ABL ← Human-like Trust | 0.729 | Regression weight of reference | – | –
BEN ← Human-like Trust | 0.793 | 0.113 | 8.534 | ***
INT ← Human-like Trust | 0.809 | 0.117 | 8.597 | ***
Notes: ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability. ***: p < 0.001.
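In Table A5, C.R. is a critical ratio evaluated as a z-statistic (conventionally the unstandardized estimate divided by its standard error, which is why it does not equal the standardized estimate divided by the listed S.E.). As a small illustrative check, not the authors' procedure, the two-tailed p-value implied by a critical ratio can be computed as follows:

```python
from scipy.stats import norm

def p_from_critical_ratio(cr: float) -> float:
    """Two-tailed p-value when the critical ratio is treated as a z-statistic."""
    return 2 * (1 - norm.cdf(abs(cr)))

# Every C.R. in Table A5 (6.936-9.908) yields p far below 0.001, matching ***.
print(p_from_critical_ratio(6.936))
```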
Table A6. Standardized regression results.
Paths | CI [2.5%, 97.5%] | β | p
ITQ → System-like Trust | [0.190, 0.413] | 0.299 | ***
System-like Trust → AGAI | [0.220, 0.393] | 0.306 | ***
ITQ → System-like Trust → AGAI (indirect effect) | [0.053, 0.140] | 0.092 | ***
ITQ → Human-like Trust | [0.144, 0.388] | 0.264 | ***
Human-like Trust → AGAI | [0.364, 0.538] | 0.452 | ***
ITQ → Human-like Trust → AGAI (indirect effect) | [0.062, 0.187] | 0.119 | ***
Notes: AGAI = Acceptance of Generative AI; ITQ = IT Quality. ***: p < 0.001.
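The indirect effects in Table A6 are the products of their component paths (0.299 × 0.306 ≈ 0.092 and 0.264 × 0.452 ≈ 0.119), and the [2.5%, 97.5%] columns are consistent with percentile bootstrap intervals. The sketch below shows one plausible way to obtain such an interval for a single mediator; the variable names are illustrative and the two regressions are simplified stand-ins for the paths of the full structural model.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=1):
    """Percentile-bootstrap CI for the indirect effect x -> m -> y (product of paths)."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)

    def indirect(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]                         # path x -> m
        design = np.column_stack([np.ones(len(xi)), mi, xi])  # y ~ m + x
        b = np.linalg.lstsq(design, yi, rcond=None)[0][1]     # path m -> y, controlling for x
        return a * b

    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)      # resample respondents with replacement
        boots.append(indirect(x[idx], m[idx], y[idx]))

    lo, hi = np.percentile(boots, [2.5, 97.5])
    return indirect(x, m, y), (lo, hi)
```

The effect is judged significant when the resulting interval excludes zero, as both indirect paths in Table A6 do.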

References

  1. Kumar, P.C.; Cotter, K.; Cabrera, L.Y. Taking Responsibility for Meaning and Mattering: An Agential Realist Approach to Generative AI and Literacy. Read. Res. Q. 2024, 59, 570–578. [Google Scholar] [CrossRef]
  2. Kang, H.; Lou, C. AI Agency vs. Human Agency: Understanding Human–AI Interactions on TikTok and Their Implications for User Engagement. J. Comput.-Mediat. Commun. 2022, 27, zmac014. [Google Scholar] [CrossRef]
  3. Stokel-Walker, C.; Van Noorden, R. What ChatGPT and Generative AI Mean for Science. Nature 2023, 614, 214–216. [Google Scholar] [CrossRef] [PubMed]
  4. Omrani, N.; Rivieccio, G.; Fiore, U.; Schiavone, F.; Agreda, S.G. To Trust or Not to Trust? An Assessment of Trust in AI-Based Systems: Concerns, Ethics and Contexts. Technol. Forecast. Soc. Change 2022, 181, 121763. [Google Scholar] [CrossRef]
  5. Liang, S.; Shi, C. Understanding the Role of Privacy Issues in AIoT Device Adoption within Smart Homes: An Integrated Model of Privacy Calculus and Technology Acceptance. Aslib J. Inf. Manag. 2025. ahead-of-print. [Google Scholar] [CrossRef]
  6. Tang, Z.; Goh, D.H.L.; Lee, C.S.; Yang, Y. Understanding Strategies Employed by Seniors in Identifying Deepfakes. Aslib J. Inf. Manag. 2024. ahead-of-print. [Google Scholar] [CrossRef]
  7. Malik, A.; Kuribayashi, M.; Abdullahi, S.M.; Khan, A.N. DeepFake Detection for Human Face Images and Videos: A Survey. IEEE Access 2022, 10, 18757–18775. [Google Scholar] [CrossRef]
  8. Shin, D.; Park, Y.J. Role of Fairness, Accountability, and Transparency in Algorithmic Affordance. Comput. Hum. Behav. 2019, 98, 277–284. [Google Scholar] [CrossRef]
  9. Johnson, D.G.; Verdicchio, M. AI Anxiety. J. Assoc. Inf. Sci. Technol. 2017, 68, 2267–2270. [Google Scholar] [CrossRef]
  10. Jacovi, A.; Marasović, A.; Miller, T.; Goldberg, Y. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada, 3–10 March 2021; pp. 624–635. [Google Scholar] [CrossRef]
  11. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in Online Shopping: An Integrated Model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  12. Lankton, N.; McKnight, D.; Tripp, J. Technology, Humanness, and Trust: Rethinking Trust in Technology. J. Assoc. Inf. Syst. 2015, 16, 880–918. [Google Scholar] [CrossRef]
  13. Glikson, E.; Woolley, A.W. Human Trust in Artificial Intelligence: Review of Empirical Research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  14. Choung, H.; David, P.; Ross, A. Trust in AI and Its Role in the Acceptance of AI Technologies. Int. J. Hum.-Comput. Interact. 2023, 39, 1727–1739. [Google Scholar] [CrossRef]
  15. Botsman, R. Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart; Perseus Books: New York, NY, USA, 2017. [Google Scholar]
  16. Hsu, M.H.; Chuang, L.W.; Hsu, C.S. Understanding Online Shopping Intention: The Roles of Four Types of Trust and Their Antecedents. Internet Res. 2014, 24, 332–352. [Google Scholar] [CrossRef]
  17. Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  18. Beckers, J.J.; Schmidt, H.G. The Structure of Computer Anxiety: A Six-Factor Model. Comput. Hum. Behav. 2001, 17, 35–49. [Google Scholar] [CrossRef]
  19. Feuerriegel, S.; Hartmann, J.; Janiesch, C.; Zschech, P. Generative AI. Bus. Inf. Syst. Eng. 2024, 66, 111–126. [Google Scholar] [CrossRef]
  20. Alasadi, E.A.; Baiz, C.R. Generative AI in Education and Research: Opportunities, Concerns, and Solutions. J. Chem. Educ. 2023, 100, 2965–2971. [Google Scholar] [CrossRef]
  21. Dolata, M.; Feuerriegel, S.; Schwabe, G. A Sociotechnical View of Algorithmic Fairness. Inf. Syst. J. 2022, 32, 754–818. [Google Scholar] [CrossRef]
  22. Babic, B.; Gerke, S.; Evgeniou, T.; Cohen, I.G. Beware Explanations from AI in Health Care. Science 2021, 373, 284–286. [Google Scholar] [CrossRef]
  23. Schiavo, G.; Businaro, S.; Zancanaro, M. Comprehension, Apprehension, and Acceptance: Understanding the Influence of Literacy and Anxiety on Acceptance of Artificial Intelligence. Technol. Soc. 2024, 77, 102537. [Google Scholar] [CrossRef]
  24. Peres, R.; Schreier, M.; Schweidel, D.; Sorescu, A. On ChatGPT and beyond: How Generative Artificial Intelligence May Affect Research, Teaching, and Practice. Int. J. Res. Mark. 2023, 40, 269–275. [Google Scholar] [CrossRef]
  25. Jovanović, M.; Campbell, M. Generative Artificial Intelligence: Trends and Prospects. Computer 2022, 55, 107–112. [Google Scholar] [CrossRef]
  26. Kreps, S.; McCain, R.M.; Brundage, M. All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. J. Exp. Political Sci. 2022, 9, 104–117. [Google Scholar] [CrossRef]
  27. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  28. King, W.R.; He, J. A Meta-Analysis of the Technology Acceptance Model. Inf. Manag. 2006, 43, 740–755. [Google Scholar] [CrossRef]
  29. Rafique, H.; Almagrabi, A.O.; Shamim, A.; Anwar, F.; Bashir, A.K. Investigating the Acceptance of Mobile Library Applications with an Extended Technology Acceptance Model (TAM). Comput. Educ. 2020, 145, 103732. [Google Scholar] [CrossRef]
  30. Rauniar, R.; Rawski, G.; Yang, J.; Johnson, B. Technology Acceptance Model (TAM) and Social Media Usage: An Empirical Study on Facebook. J. Enterp. Inf. Manag. 2014, 27, 6–30. [Google Scholar] [CrossRef]
  31. Scherer, R.; Siddiq, F.; Tondeur, J. The Technology Acceptance Model (TAM): A Meta-Analytic Structural Equation Modeling Approach to Explaining Teachers’ Adoption of Digital Technology in Education. Comput. Educ. 2019, 128, 13–35. [Google Scholar] [CrossRef]
  32. Wu, B.; Chen, X. Continuance Intention to Use MOOCs: Integrating the Technology Acceptance Model (TAM) and Task Technology Fit (TTF) Model. Comput. Hum. Behav. 2017, 67, 221–232. [Google Scholar] [CrossRef]
  33. Wallace, L.G.; Sheetz, S.D. The Adoption of Software Measures: A Technology Acceptance Model (TAM) Perspective. Inf. Manag. 2014, 51, 249–259. [Google Scholar] [CrossRef]
  34. Wu, I.L.; Chen, J.L. An Extension of Trust and TAM Model with TPB in the Initial Adoption of On-Line Tax: An Empirical Study. Int. J. Hum.-Comput. Stud. 2005, 62, 784–808. [Google Scholar] [CrossRef]
  35. Califf, C.B.; Brooks, S.; Longstreet, P. Human-like and System-like Trust in the Sharing Economy: The Role of Context and Humanness. Technol. Forecast. Soc. Change 2020, 154, 119968. [Google Scholar] [CrossRef]
  36. Lam, T. Continuous Use of AI Technology: The Roles of Trust and Satisfaction. Aslib J. Inf. Manag. 2025. ahead-of-print. [Google Scholar] [CrossRef]
  37. Habbal, A.; Ali, M.K.; Abuzaraida, M.A. Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, Applications, Challenges and Future Research Directions. Expert Syst. Appl. 2024, 240, 122442. [Google Scholar] [CrossRef]
  38. Vorm, E.S.; Combs, D.J.Y. Integrating Transparency, Trust, and Acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM). Int. J. Hum.-Comput. Interact. 2022, 38, 1828–1845. [Google Scholar] [CrossRef]
  39. Al-Adwan, A.S.; Li, N.; Al-Adwan, A.; Abbasi, G.A.; Albelbisi, N.A.; Habibi, A. Extending the Technology Acceptance Model (TAM) to Predict University Students’ Intentions to Use Metaverse-Based Learning Platforms. Educ. Inf. Technol. 2023, 28, 15381–15413. [Google Scholar] [CrossRef]
  40. Rousseau, D.M.; Sitkin, S.B.; Burt, R.S.; Camerer, C. Not So Different After All: A Cross-Discipline View Of Trust. Acad. Manag. Rev. 1998, 23, 393–404. [Google Scholar] [CrossRef]
  41. Keohane, R.O. Reciprocity in International Relations. Int. Organ. 1986, 40, 1–27. [Google Scholar] [CrossRef]
  42. Hancock, P.A.; Kessler, T.T.; Kaplan, A.D.; Brill, J.C.; Szalma, J.L. Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses. Hum. Factors 2021, 63, 1196–1229. [Google Scholar] [CrossRef]
  43. Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable Artificial Intelligence: A Survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 0210–0215. [Google Scholar] [CrossRef]
  44. Molm, L.D.; Takahashi, N.; Peterson, G. Risk and Trust in Social Exchange: An Experimental Test of a Classical Proposition. Am. J. Sociol. 2000, 105, 1396–1427. [Google Scholar] [CrossRef]
  45. Zhou, T.; Ma, X. Examining Generative AI User Continuance Intention Based on the SOR Model. Aslib J. Inf. Manag. 2025. ahead-of-print. [Google Scholar] [CrossRef]
  46. Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and Higher Education: A Systematic Review. Comput. Educ. Artif. Intell. 2023, 5, 100152. [Google Scholar] [CrossRef]
  47. Shin, D. User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
  48. Liang, L.; Sun, Y.; Yang, B. Algorithm Characteristics, Perceived Credibility and Verification of ChatGPT-Generated Content: A Moderated Nonlinear Model. Aslib J. Inf. Manag. 2025. ahead-of-print. [Google Scholar] [CrossRef]
  49. Al-Debei, M.M.; Akroush, M.N.; Ashouri, M.I. Consumer Attitudes towards Online Shopping: The Effects of Trust, Perceived Benefits, and Perceived Web Quality. Internet Res. 2015, 25, 707–733. [Google Scholar] [CrossRef]
  50. Amin, M.; Ryu, K.; Cobanoglu, C.; Nizam, A. Determinants of Online Hotel Booking Intentions: Website Quality, Social Presence, Affective Commitment, and e-Trust. J. Hosp. Mark. Manag. 2021, 30, 845–870. [Google Scholar] [CrossRef]
  51. Seckler, M.; Heinz, S.; Forde, S.; Tuch, A.N.; Opwis, K. Trust and Distrust on the Web: User Experiences and Website Characteristics. Comput. Hum. Behav. 2015, 45, 39–50. [Google Scholar] [CrossRef]
  52. Jeon, M.M.; Jeong, M. Customers’ Perceived Website Service Quality and Its Effects on e-Loyalty. Int. J. Contemp. Hosp. Manag. 2017, 29, 438–457. [Google Scholar] [CrossRef]
  53. Wang, Y.Y.; Wang, Y.S. Development and Validation of an Artificial Intelligence Anxiety Scale: An Initial Application in Predicting Motivated Learning Behavior. Interact. Learn. Environ. 2022, 30, 619–634. [Google Scholar] [CrossRef]
  54. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to Use and How to Report the Results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  55. Henseler, J.; Hubona, G.; Ray, P.A. Using PLS Path Modeling in New Technology Research: Updated Guidelines. Ind. Manag. Data Syst. 2016, 116, 2–20. [Google Scholar] [CrossRef]
  56. Rai, A. Explainable AI: From Black Box to Glass Box. J. Acad. Mark. Sci. 2020, 48, 137–141. [Google Scholar] [CrossRef]
  57. Dehling, T.; Sunyaev, A. A Design Theory for Transparency of Information Privacy Practices. Inf. Syst. Res. 2024, 35, iii–x. [Google Scholar] [CrossRef]
  58. Ye, X.; Yan, Y.; Li, J.; Jiang, B. Privacy and Personal Data Risk Governance for Generative Artificial Intelligence: A Chinese Perspective. Telecommun. Policy 2024, 48, 102851. [Google Scholar] [CrossRef]
  59. Troisi, O.; Fenza, G.; Grimaldi, M.; Loia, F. COVID-19 Sentiments in Smart Cities: The Role of Technology Anxiety Before and During the Pandemic. Comput. Hum. Behav. 2022, 126, 106986. [Google Scholar] [CrossRef]
  60. Guo, Y.; An, S.; Comes, T. From Warning Messages to Preparedness Behavior: The Role of Risk Perception and Information Interaction in the COVID-19 Pandemic. Int. J. Disaster Risk Reduct. 2022, 73, 102871. [Google Scholar] [CrossRef]
  61. Choi, J.; Yoo, D. The Impacts of Self-Construal and Perceived Risk on Technology Readiness. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1584–1597. [Google Scholar] [CrossRef]
  62. Legood, A.; van der Werff, L.; Lee, A.; den Hartog, D.; van Knippenberg, D. A Critical Review of the Conceptualization, Operationalization, and Empirical Literature on Cognition-Based and Affect-Based Trust. J. Manag. Stud. 2023, 60, 495–537. [Google Scholar] [CrossRef]
  63. Lazcoz, G.; de Hert, P. Humans in the GDPR and AIA Governance of Automated and Algorithmic Systems. Essential Pre-Requisites against Abdicating Responsibilities. Comput. Law Secur. Rev. 2023, 50, 105833. [Google Scholar] [CrossRef]
  64. Cobbe, J.; Singh, J. Artificial Intelligence as a Service: Legal Responsibilities, Liabilities, and Policy Challenges. Comput. Law Secur. Rev. 2021, 42, 105573. [Google Scholar] [CrossRef]
  65. Dong, H.; Chen, J. Meta-Regulation: An Ideal Alternative to the Primary Responsibility as the Regulatory Model of Generative AI in China. Comput. Law Secur. Rev. 2024, 54, 106016. [Google Scholar] [CrossRef]
Figure 1. Theoretical model.
Figure 2. Structural model results. Trust in Websites, Human-like Trust, and System-like Trust act as predictors of Perceived Usefulness (PU), Acceptance of Generative AI (AGAI), and Intention to Use (ITU). Generative AI Abuse Anxiety (GAIA) acts as an exogenous factor (*: p < 0.05; **: p < 0.01; ***: p < 0.001).
Figure 3. Structural model results for Trust in Websites regarding generative AI. IT Quality (ITQ) and Security and Privacy (SAP), as subdimensions of Trust in Websites, act as predictors of Perceived Usefulness (PU) and Acceptance of Generative AI (AGAI). Generative AI Abuse Anxiety (GAIA) acts as an exogenous factor (**: p < 0.01; ***: p < 0.001).
Figure 4. Structural model results for Human-like Trust regarding generative AI. Ability (ABL), Benevolence (BEN), and Integrity (INT), as subdimensions of Human-like Trust, act as predictors of Perceived Usefulness (PU) and Acceptance of Generative AI (AGAI). Generative AI Abuse Anxiety (GAIA) acts as an exogenous factor (*: p < 0.05; **: p < 0.01; ***: p < 0.001).
Figure 5. Structural model results for System-like Trust regarding generative AI. Fairness (FAI), Accountability (ACC), Transparency (TRA), and Explainability (EXP), as subdimensions of System-like Trust, act as predictors of Perceived Usefulness (PU) and Acceptance of Generative AI (AGAI). Generative AI Abuse Anxiety (GAIA) acts as an exogenous factor (*: p < 0.05; **: p < 0.01; ***: p < 0.001).
Table 1. Characteristics of participants.
Characteristic | Option | Frequency | Percentage (%)
Gender | Male | 169 | 53.1
Gender | Female | 149 | 46.9
Age | <23 | 58 | 18.2
Age | 23–30 | 107 | 33.6
Age | 31–40 | 104 | 32.7
Age | 41–50 | 39 | 12.3
Age | >50 | 10 | 3.1
Education | Associate degree and below | 41 | 12.9
Education | Undergraduate | 142 | 44.7
Education | Graduate | 135 | 42.6
Usage Frequency | Once or more per day | 39 | 12.3
Usage Frequency | Once a week | 99 | 31.1
Usage Frequency | Once every two weeks | 67 | 21.1
Usage Frequency | Once a month | 113 | 35.5
Table 2. Reliability and validity.
Variable | Item | Factor Loading | AVE | CR | Cronbach's Alpha
Generative AI Abuse Anxiety (GAIA) | GAIA1 | 0.907 | 0.791 | 0.919 | 0.868
GAIA2 | 0.926
GAIA3 | 0.832
Perceived Usefulness (PU) | PU1 | 0.892 | 0.779 | 0.934 | 0.906
PU2 | 0.899
PU3 | 0.857
PU4 | 0.883
Acceptance of Generative AI (AGAI) | AGAI1 | 0.850 | 0.722 | 0.912 | 0.871
AGAI2 | 0.829
AGAI3 | 0.856
AGAI4 | 0.863
Intention to Use (ITU) | ITU1 | 0.921 | 0.838 | 0.939 | 0.903
ITU2 | 0.916
ITU3 | 0.909
IT Quality (ITQ) | ITQ1 | 0.936 | 0.882 | 0.938 | 0.867
ITQ2 | 0.942
Security and Privacy (SAP) | SAP1 | 0.902 | 0.795 | 0.921 | 0.871
SAP2 | 0.905
SAP3 | 0.867
Ability (ABL) | ABL1 | 0.907 | 0.799 | 0.923 | 0.874
ABL2 | 0.893
ABL3 | 0.881
Benevolence (BEN) | BEN1 | 0.868 | 0.774 | 0.911 | 0.853
BEN2 | 0.913
BEN3 | 0.856
Integrity (INT) | INT1 | 0.850 | 0.731 | 0.891 | 0.816
INT2 | 0.865
INT3 | 0.851
Fairness (FAI) | FAI1 | 0.874 | 0.818 | 0.931 | 0.888
FAI2 | 0.923
FAI3 | 0.915
Accountability (ACC) | ACC1 | 0.891 | 0.790 | 0.919 | 0.867
ACC2 | 0.885
ACC3 | 0.891
Transparency (TRA) | TRA1 | 0.886 | 0.761 | 0.905 | 0.844
TRA2 | 0.887
TRA3 | 0.843
Explainability (EXP) | EXP1 | 0.811 | 0.697 | 0.873 | 0.783
EXP2 | 0.855
EXP3 | 0.838
Notes: GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use; ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
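The AVE and CR values in Table 2 follow directly from the standardized loadings: AVE is the mean of the squared loadings, and composite reliability is (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. A minimal check is sketched below (Cronbach's alpha additionally requires the item covariances, so it is omitted):

```python
import numpy as np

def ave_and_cr(loadings):
    """Average variance extracted and composite reliability from standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)                                          # mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2))  # composite reliability
    return round(ave, 3), round(cr, 3)

# The GAIA loadings from Table 2 reproduce its AVE = 0.791 and CR = 0.919
print(ave_and_cr([0.907, 0.926, 0.832]))
```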
Table 3. Correlation matrix.
Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13
1. ABL | 0.894
2. AGAI | 0.538 | 0.850
3. ACC | 0.102 | 0.321 | 0.889
4. BEN | 0.474 | 0.618 | 0.212 | 0.880
5. EXP | 0.434 | 0.594 | 0.357 | 0.464 | 0.835
6. FAI | 0.289 | 0.562 | 0.496 | 0.404 | 0.544 | 0.904
7. GAIA | −0.323 | −0.413 | −0.244 | −0.282 | −0.346 | −0.301 | 0.889
8. ITQ | 0.247 | 0.550 | 0.157 | 0.300 | 0.394 | 0.316 | −0.215 | 0.939
9. INT | 0.533 | 0.591 | 0.147 | 0.535 | 0.439 | 0.355 | −0.239 | 0.251 | 0.855
10. ITU | 0.434 | 0.640 | 0.377 | 0.430 | 0.618 | 0.665 | −0.360 | 0.455 | 0.389 | 0.915
11. PU | 0.438 | 0.689 | 0.251 | 0.458 | 0.602 | 0.547 | −0.355 | 0.614 | 0.426 | 0.690 | 0.883
12. SAP | 0.386 | 0.581 | 0.115 | 0.423 | 0.429 | 0.373 | −0.248 | 0.498 | 0.363 | 0.457 | 0.639 | 0.892
13. TRA | 0.193 | 0.390 | 0.666 | 0.259 | 0.399 | 0.523 | −0.238 | 0.190 | 0.205 | 0.476 | 0.270 | 0.112 | 0.872
Notes: The bold diagonal numbers are the square roots of the variances extracted. GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use; ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
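As the note to Table 3 indicates, discriminant validity here rests on the Fornell–Larcker criterion: each bold diagonal value (the square root of the construct's AVE) should exceed that construct's correlations with every other construct. A small sketch of that check, which accepts either a full correlation matrix or a lower triangle like Table 3 (the example submatrix covers ABL, AGAI, and ACC only):

```python
import numpy as np

def fornell_larcker_ok(matrix):
    """True if every diagonal sqrt(AVE) exceeds the off-diagonal correlations
    in its own row and column (Fornell-Larcker criterion)."""
    m = np.abs(np.asarray(matrix, dtype=float))
    diag = np.diag(m)
    off = m - np.diag(diag)                      # zero out the diagonal
    return bool(np.all(diag > off.max(axis=0)) and np.all(diag > off.max(axis=1)))

# First three constructs of Table 3 (ABL, AGAI, ACC), lower triangle only
sub = [[0.894, 0.0,   0.0],
       [0.538, 0.850, 0.0],
       [0.102, 0.321, 0.889]]
print(fornell_larcker_ok(sub))  # True
```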
Table 4. Heterotrait–Monotrait ratio of correlations result.
Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13
1. ABL
2. AGAI | 0.616
3. ACC | 0.117 | 0.369
4. BEN | 0.546 | 0.715 | 0.246
5. EXP | 0.523 | 0.717 | 0.428 | 0.565
6. FAI | 0.327 | 0.636 | 0.562 | 0.461 | 0.645
7. GAIA | 0.367 | 0.471 | 0.278 | 0.322 | 0.412 | 0.336
8. ITQ | 0.284 | 0.632 | 0.179 | 0.350 | 0.476 | 0.361 | 0.238
9. INT | 0.630 | 0.701 | 0.175 | 0.639 | 0.550 | 0.416 | 0.280 | 0.297
10. ITU | 0.488 | 0.721 | 0.427 | 0.488 | 0.729 | 0.742 | 0.405 | 0.513 | 0.453
11. PU | 0.492 | 0.775 | 0.282 | 0.521 | 0.712 | 0.610 | 0.394 | 0.693 | 0.495 | 0.763
12. SAP | 0.442 | 0.666 | 0.133 | 0.491 | 0.517 | 0.424 | 0.283 | 0.572 | 0.431 | 0.514 | 0.718
13. TRA | 0.224 | 0.453 | 0.779 | 0.307 | 0.489 | 0.606 | 0.272 | 0.222 | 0.249 | 0.546 | 0.308 | 0.129
Notes: GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use; ITQ = IT Quality; SAP = Security and Privacy; ABL = Ability; BEN = Benevolence; INT = Integrity; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
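The HTMT values in Table 4 are computed on the item-level correlation matrix: the mean absolute correlation between the items of two different constructs, divided by the geometric mean of the mean absolute within-construct item correlations; values below the common 0.85/0.90 cut-offs support discriminant validity. A sketch under that definition (the item matrix and index lists are illustrative inputs, not data from the paper):

```python
import numpy as np
from itertools import combinations

def htmt(item_corr, items_i, items_j):
    """Heterotrait-Monotrait ratio for two constructs, given an item-level
    correlation matrix and the item indices belonging to each construct."""
    R = np.abs(np.asarray(item_corr, dtype=float))

    # heterotrait-heteromethod: correlations between items of different constructs
    hetero = np.mean([R[a, b] for a in items_i for b in items_j])

    # monotrait-heteromethod: correlations among items of the same construct
    mono_i = np.mean([R[a, b] for a, b in combinations(items_i, 2)])
    mono_j = np.mean([R[a, b] for a, b in combinations(items_j, 2)])

    return hetero / np.sqrt(mono_i * mono_j)
```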
Table 5. Standardized regression results.
Paths | CI [2.5%, 97.5%] | Hypotheses | Supported?
GAIA → PU | [−0.135, 0.000] | H1 | Yes
PU → AGAI | [0.031, 0.251] | H2 | Yes
GAIA → AGAI | [−0.145, −0.015] | H3 | Yes
GAIA → Trust in Websites | [−0.376, −0.166]
GAIA → Human-like Trust | [−0.445, −0.240]
GAIA → System-like Trust | [−0.466, −0.274]
Trust in Websites → PU | [0.423, 0.648]
Human-like Trust → PU | [0.029, 0.224]
System-like Trust → PU | [0.172, 0.354]
Trust in Websites → AGAI | [0.152, 0.382]
Human-like Trust → AGAI | [0.281, 0.457]
System-like Trust → AGAI | [0.140, 0.301]
PU → ITU | [0.359, 0.581] | H13 | Yes
AGAI → ITU | [0.197, 0.430] | H14 | Yes
Notes: GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITU = Intention to Use.
Table 6. Standardized regression results for Trust in Websites regarding generative AI.
Paths | CI [2.5%, 97.5%] | Hypotheses | Supported?
GAIA → ITQ | [−0.334, −0.098] | H4a | Yes
GAIA → SAP | [−0.363, −0.136] | H4b | Yes
ITQ → PU | [0.268, 0.485] | H7a | Yes
SAP → PU | [0.277, 0.536] | H7b | Yes
ITQ → AGAI | [0.057, 0.305] | H10a | Yes
SAP → AGAI | [0.058, 0.355] | H10b | Yes
Notes: GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ITQ = IT Quality; SAP = Security and Privacy.
Table 7. Standardized regression results for Human-like Trust regarding generative AI.
Paths | CI [2.5%, 97.5%] | Hypotheses | Supported?
GAIA → ABL | [−0.426, −0.216] | H5a | Yes
GAIA → BEN | [−0.388, −0.170] | H5b | Yes
GAIA → INT | [−0.345, −0.129] | H5c | Yes
ABL → PU | [0.052, 0.308] | H8a | Yes
BEN → PU | [0.108, 0.360] | H8b | Yes
INT → PU | [0.033, 0.285] | H8c | Yes
ABL → AGAI | [0.011, 0.179] | H11a | Yes
BEN → AGAI | [0.142, 0.332] | H11b | Yes
INT → AGAI | [0.118, 0.306] | H11c | Yes
Notes: GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; ABL = Ability; BEN = Benevolence; INT = Integrity.
Table 8. Standardized regression results for System-like Trust regarding generative AI.
Paths | CI [2.5%, 97.5%] | Hypotheses | Supported?
GAIA → FAI | [−0.403, −0.201] | H6a | Yes
GAIA → ACC | [−0.343, −0.145] | H6b | Yes
GAIA → TRA | [−0.348, −0.129] | H6c | Yes
GAIA → EXP | [−0.439, −0.253] | H6d | Yes
FAI → PU | [0.219, 0.465] | H9a | Yes
ACC → PU | [−0.186, 0.082] | H9b | No
TRA → PU | [−0.187, 0.048] | H9c | No
EXP → PU | [0.308, 0.514] | H9d | Yes
FAI → AGAI | [0.034, 0.250] | H12a | Yes
ACC → AGAI | [−0.133, 0.072] | H12b | No
TRA → AGAI | [0.009, 0.231] | H12c | Yes
EXP → AGAI | [0.058, 0.280] | H12d | Yes
Notes: GAIA = Generative AI Abuse Anxiety; PU = Perceived Usefulness; AGAI = Acceptance of Generative AI; FAI = Fairness; ACC = Accountability; TRA = Transparency; EXP = Explainability.
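Across Tables 5–8, the "Supported?" column reflects the usual bootstrap decision rule: a hypothesized path is supported when its 95% confidence interval excludes zero and rejected otherwise (e.g., H9b, H9c, and H12b span zero). A one-line check of that rule:

```python
def path_supported(ci_low: float, ci_high: float) -> bool:
    """A path is supported when its 95% bootstrap CI excludes zero (Tables 5-8)."""
    return ci_low > 0 or ci_high < 0

print(path_supported(-0.186, 0.082))  # H9b (ACC -> PU): False, not supported
print(path_supported(0.308, 0.514))   # H9d (EXP -> PU): True, supported
```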
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
