Article

Factors Affecting Human-Generated AI Collaboration: Trust and Perceived Usefulness as Mediators

Department of Business Administration, Mokpo National University, Muan-gun 534-729, Republic of Korea
*
Author to whom correspondence should be addressed.
Information 2025, 16(10), 856; https://doi.org/10.3390/info16100856
Submission received: 9 September 2025 / Revised: 1 October 2025 / Accepted: 2 October 2025 / Published: 3 October 2025
(This article belongs to the Section Artificial Intelligence)

Abstract

With the development of generative artificial intelligence (AI) technology, collaboration between humans and AI is expected to improve productivity, efficiency, and safety across various industries. This study presents and empirically analyzes a research model of the factors affecting human–AI collaboration, built on antecedents of calculative-based, cognition-based, knowledge-based, and social influence-based trust. A total of 305 valid responses were collected through questionnaires completed by experts, office workers, and graduate students and analyzed using structural equation modeling. The analysis showed that all antecedents except familiarity, an antecedent of knowledge-based trust, significantly affected trust.

1. Introduction

The concept of artificial intelligence (AI) first emerged in the mid-20th century alongside advancements in computer science. Attempts to mechanically replicate human abilities saw two significant leaps, in the 1950s and 1980s, while simultaneously revealing their limitations. The history of AI has been marked by cycles of expectation and failure, and people gradually came to accept the possibility of machines replacing humans; however, artistic and creative tasks, such as writing poetry, painting pictures, and composing music, were still believed to remain uniquely human domains. This view persisted until the early 21st century. Even when Deep Blue defeated the world chess champion in 1997 and AlphaGo beat Lee Sedol in 2016, people still believed that creative work was exclusively human territory. Then ChatGPT, introduced in November 2022 and powered by GPT-3.5, emerged and proved its ability to generate high-quality content indistinguishable from human creations. Subsequently, contrary to these expectations, new AI technologies have spread across all areas of artistic creation, including writing, painting, music composition, and video production. AI that generates various forms of creative content, such as text, images, and music, using big data and deep learning technologies, like ChatGPT, is called “generative AI” [1,2]. Generative AI made a spectacular debut by producing results that exceeded people’s expectations; however, it had several critical issues in practical use [3,4].
The first issue is bias [5]. Generative AI learns from large datasets to create new content; thus, its learning process reflects the political, social, and cultural biases inherent in the data. For instance, generative AI trained on biased data may produce problematic expressions about certain genders, races, or cultures, reinforcing negative stereotypes about specific groups. Another issue is a phenomenon called “hallucination” [6]. Generative AI focuses on creating outputs based on the learned data; when faced with situations requiring information it does not have, it generates content that diverges from reality. This divergence (or hallucination) can lead to AI producing baseless statements or distorted images of things that do not exist. To address these initial issues, measures were implemented such as managing data sources, refining uncertain data, strengthening human feedback through the reinforcement learning from human feedback (RLHF) technique, and employing algorithms that carefully review sensitive attributes such as gender, race, and age to reduce bias [7]. At the same time, technological advancements increased model parameters, significantly reducing problems like bias and hallucination. Consequently, new versions of generative AI were released.
With the emergence of new versions, practical and psychological barriers to actual use have been resolved; thus, users’ attitudes toward generative AI have become increasingly positive [8]. This positive change in perception has created a basis for using AI in many industries, bringing the topic of collaboration between humans and generative AI to the forefront. Generative AI has prompted significant changes in traditional work methods, and the need for research on collaboration with generative AI is increasing.
Despite the rapid proliferation of generative AI tools such as ChatGPT, Midjourney, and DALL·E, empirical research on user behavior, perceptions, and acceptance factors remains limited. Traditional technology acceptance models, such as TAM and UTAUT, primarily focus on organizational or general information system contexts, making them insufficient to fully explain the creative, exploratory, and experimental nature of individual generative AI usage. In conventional systems, functional factors such as perceived usefulness and ease of use have been emphasized. In contrast, the generative AI environment highlights factors that were not central in previous studies, such as understandability and reliability in outcomes. This shift calls for an integrated analysis of a broader concept of trust that encompasses both the process and the results. Therefore, research is needed to better understand user acceptance behavior and, further, the intention to collaborate with generative AI—centered on the concept of trust. However, systematic research on collaboration intention with generative AI is lacking; therefore, this paper explores the main factors influencing the intention to collaborate with generative AI. In particular, this study seeks to identify the antecedents of trust in generative AI and determine how trust and usefulness affect the intention to collaborate with generative AI. Through this approach, we aim to provide a theoretical basis for efficiently designing collaborative environments with generative AI.

2. Theoretical Background

2.1. Generative AI

Generative AI is a branch of AI [1,2] that trains generative models on existing datasets, using these models to generate new data similar to existing data. Generative AI uses generative modeling based on deep learning technology to automatically create various forms of content, including images, text, audio, and video [9]. To this end, generative AI mainly utilizes deep learning technologies such as the generative adversarial network (GAN) and the generative pre-trained transformer (GPT).
GAN is a model in which two neural networks, a generator and a discriminator, compete with each other during training: the generator produces fake data resembling the original data, the discriminator attempts to distinguish it from real data, and this interaction leads the model to generate increasingly realistic new data [10]. GAN has attracted significant attention recently due to its remarkable ability to generate highly realistic new data for various applications, such as image and video synthesis and text generation [11]. GAN can create realistic images that are practically indistinguishable from real ones, supporting applications in computer vision, games, and virtual reality. Furthermore, GAN can generate consistent and grammatically correct text in natural language processing [12], making it useful in applications like chatbots and automated writing [13].
GPT is a natural language processing model based on multilayer deep neural networks and is mainly used for various language-related tasks such as text generation, translation, and summarization [9]. ChatGPT is a generative AI developed by OpenAI based on GPT. It is a large-scale language model built on a neural network architecture called the transformer, and as one of the most powerful generative AIs it has recently gained vast popularity [9,11]. ChatGPT recognizes language patterns and uses them to generate new texts, learns how to engage in open-ended conversations with humans, and can answer a wide range of users’ questions with its vast knowledge base [9]. ChatGPT based on GPT-4, which is mainly used today, can hold natural and consistent conversations with users and contains knowledge on various topics. Furthermore, a commercial version of ChatGPT based on GPT-4o can understand and process more sophisticated and complex contexts, converse in various languages, and reflect cultural backgrounds [7].
As shown above, generative AI has excellent performance in various fields; thus, research on collaboration with AI is gaining attention.

2.2. Collaboration with AI

Several studies have investigated collaboration between humans and AI or robots since the 2000s [14,15,16]. These studies can be divided into three categories. First, research has explored the possibility and effectiveness of collaboration between humans and AI. These studies predict that collaboration between humans and AI will be essential and highly effective in various fields. For example, Wang et al. [15] argued that collaboration between humans and AI systems will occur in data science and that automation and human expertise are therefore both essential. Sowa et al. [17] confirmed the hypothesis that human-AI collaboration generally improves productivity in the knowledge industry. They argued that the future of AI should focus on a collaborative approach in which humans and AI work closely together rather than aiming for complete automation. Moreover, Lai et al. [18] found that human–AI collaboration in healthcare could alleviate the shortage of qualified healthcare workers, support overworked healthcare professionals, and improve the quality of care.
Second, other studies have examined AI design for collaboration between humans and AI. These studies explore how AI can be designed to facilitate effective collaboration between humans and AI, mainly focusing on interaction methods or explainable AI design [19]. For example, Fan et al. [20] emphasized AI design from a user experience (UX) perspective to facilitate effective collaboration between humans and AI. Furthermore, explainable AI design has attracted significant attention, especially in medicine [21].
The third category assumes that collaboration between humans and AI is essential and effective; these studies empirically analyze factors that promote such collaboration. For example, Cai et al. [14] conducted an empirical study on collaboration between medical experts and diagnostic AI assistants, arguing that AI transparency is important for collaborative decision-making. Additionally, Glikson and Woolley [22] suggested that trust is essential for collaboration between humans and AI. Their literature review revealed that tangibility, transparency, and reliability help form cognition-based trust. Analyzing the relative weight given to human and machine judgment in managerial decisions, Haesevoets et al. [23] revealed that trust in machines is a significant factor in collaboration. Vössing et al. [24] also emphasized the importance of trust in collaboration between humans and AI and suggested transparency as a prerequisite for strengthening trust. Moreover, Zhang et al. [16] analyzed previous studies, confirming that human distrust is the biggest obstacle to cooperation between humans and AI. Through controlled experiments, they revealed that trust and experience in AI are important factors in cooperation. As mentioned, numerous previous studies confirmed that trust in AI is a key factor in the intention to cooperate.

2.3. Trust and Its Antecedents

Trust has been studied in various social science fields, such as psychology, sociology, and business administration, and plays a role in strengthening cooperation and weakening negative confrontation by positively changing social exchange relationships based on mutual belief [25,26]. Research on trust began by examining relationships between people and between people and organizations, and has expanded to online e-commerce relationships since the 2000s. In particular, since Gefen et al.’s [27] study, research on trust in the online domain has been actively conducted. As technology develops, the role of trust is also expanding to collaboration with AI and robots [22].
Gefen et al. [27] conducted an empirical analysis based on research on trust in various academic fields. They proposed five antecedents of trust in the electronic commerce domain: personality-based trust, calculative-based trust, cognition-based trust, knowledge-based trust, and institution-based trust. Personality-based trust is a concept derived from psychology and refers to a general disposition to trust others. In other words, whether trust is formed depends on the disposition of the trustor rather than on the other party’s behavior. In general, personality-based trust occurs in the initial relationship, before a rational basis grounded in experience is available. Calculative-based trust is a concept derived from economics and starts from the idea that the trust-building mechanism includes a computational process; that is, trust is formed through a rational evaluation of the costs and benefits that arise in the course of interaction [26]. Cognition-based trust is a concept derived from social psychology; it is created through cognitive evaluation [28] of the other party’s ability, track record, and dependability. Knowledge-based trust is formed on the basis of experience and information about the other party, because the other party’s behavioral pattern can be predicted. Therefore, the better an individual knows the other party, the higher the predictability, increasing trust [26]. Finally, institution-based trust is a concept derived from sociology, referring to trust felt due to institutional devices such as guarantees or regulations inherent in a specific situation [29,30]. In other words, it refers to a situation in which individuals or organizations can trust the other party because a trustworthy system has been established, even though they do not have a direct relationship [30].
In this study, the trust theories presented in the study of Gefen et al. [27] are used as the theoretical basis for deriving the antecedent factors of trust in generative AI. While several trust theories in human–AI interaction focus on psychological traits or behavioral measures, these are often suited to experimental or longitudinal designs and less applicable to survey-based research. In contrast, Gefen et al. [27] offer a structured, empirically grounded framework that supports quantitative analysis using structural equation modeling (SEM) and aligns well with technology-mediated environments.
Regarding personality-based trust, Gefen et al. [27] dealt with initial trust based on judgment before use. Our study deals with experienced users; therefore, personality-based trust was excluded. We also excluded institution-based trust because current generative AI has limitations in providing situational normality and structural guarantees. Meanwhile, trust based on social influence should not be overlooked when discussing trust in recommender systems like generative AI [31]. Social influence has already been revealed as an important factor affecting trust in various information systems (IS) [32]. Therefore, this study uses calculative-based, cognition-based, knowledge-based, and social influence-based trust as the fundamental theories for deriving the antecedent factors of trust in generative AI.

2.3.1. Calculative-Based Trust Antecedents

Calculative-based trust is formed by calculating gains and losses. This study presents performance as the variable conceptualizing calculative-based trust in generative AI. Performance is a quantitative evaluation of outputs versus inputs, measured by how efficiently a system operates within given resources. In the case of generative AI, performance focuses on the generated output and is assessed through the user’s evaluative judgment [33]; in other words, it reflects whether the output of generative AI matches the user’s requirements and purpose. The performance criteria of generative AI include whether it produces better output than humans or whether it makes fewer errors and is more accurate than humans [34]. Because performance is thus a calculated or compared result, we apply performance as an antecedent of calculative-based trust.

2.3.2. Cognition-Based Trust Antecedents

Cognition-based trust is trust generated by cognitive evaluation [28], in which trustworthiness is determined by evaluating the other party’s ability, track record, and dependability. Previous studies on AI or systems found that reliability and understandability are significant variables affecting trust [24,33,35]. Therefore, this study selected reliability and understandability as the detailed variables that form cognition-based trust. Reliability is a concept distinct from trust and concerns the dependability of the result; it refers to the extent to which the result produced by generative AI is recognized as consistent and predictable [35]. Understandability refers to how well users can interpret and understand generative AI results [35]. Cognition-based trust thus addresses the interpretation, judgment, consistency, and understandability of the outcome.

2.3.3. Knowledge-Based Trust Antecedents

Knowledge-based trust is based on information about, predictability of, and understanding of the other party [26]. This concept involves understanding the elements and processes of a specific other party, and such understanding can be expressed as familiarity [36], which many IS studies have used as a variable of knowledge-based trust [27,37]. Therefore, this study also presents familiarity as a variable of knowledge-based trust. Familiarity increases trust by increasing the degree to which the subject is understood through the accumulation of experience and understanding of what is happening [27]. In other words, when users feel familiar with a device, they think they know the machine well and trust it on that basis [27].

2.3.4. Social Influence-Based Trust Antecedents

Social influence-based trust is formed not by individuals directly evaluating others but by social relationships, the opinions of the people around them, and the norms and values of the group. In the IS literature, social influence refers to the influence of others on an individual’s perception that they should use a new IS [38]. Many areas of social science have examined the relationship between social influence and trust. In particular, social influence and subjective norms have been identified as important factors in many studies on trust and behavioral intention [34,39,40]; therefore, this study also includes social influence as a significant antecedent of trust.

2.4. Perceived Usefulness

This study adopts the perceived usefulness of the technology acceptance model (TAM) as a technological element of human–AI collaboration. TAM is a theoretical model that explains the major factors affecting users’ acceptance and use of new technologies; it comprises two major variables: perceived usefulness and perceived ease of use. TAM explains technology acceptance by evaluating how easily users learn new technologies and how helpful such technologies are in their work or daily lives. Studies on accepting new technologies have mainly used these factors [41].
The perceived usefulness of generative AI indicates how much the user believes that the system can help improve their work performance. Previous studies have shown that users’ intention to use a system increases when they believe that the results provided by the system can improve work efficiency [38]. The system’s usefulness is an important motivation for users to adopt the technology. In particular, when the positive aspects provided by the system, such as work efficiency and time savings, become apparent, the user’s intention to accept it naturally increases [42]. In this context, the perception of generative AI’s usefulness will have a decisive influence on the intention to collaborate with the AI system and the intention to use the system. Research on collaboration with AI is already being conducted in many practical areas, such as AI diagnosis systems and image processing [43,44].

3. Research Model and Hypothesis Development

3.1. Research Model

This study proposes a research model based on trust and perceived usefulness to analyze the factors affecting the collaboration between humans and generative AI. Based on four existing trust theories (calculative-based, cognition-based, knowledge-based, and social influence-based trust), five variables were derived: performance, reliability, understandability, familiarity, and social influence. A research model (Figure 1) was constructed in which these variables affect trust and perceived usefulness.

3.2. Hypothesis

This study considers performance as a variable of calculative-based trust. Performance evaluates how well a machine works; if this criterion is likened to a person, it corresponds to ability. Generally, if a person can handle things well, they naturally gain trust because they give the impression of being responsible and competent. Accordingly, in the organizational literature, ability has long been presented, alongside integrity and benevolence, as a strong antecedent of trust [45]. Similarly to humans, machines that perform well can be considered trustworthy. If the results produced by generative AI are satisfactory, people will evaluate the system’s performance highly, and as a result, trust in the system will increase. Accordingly, we propose hypothesis H1.
H1: 
Performance positively affects trust.
Madsen and Gregor [35] conducted an empirical analysis showing that perceived system performance affects trust in the human-computer context. In the IS literature, performance has been presented as a factor that forms the perceived usefulness of specific information technology [41] and has also been considered a factor that forms system quality [46]. According to the IS success model [47], the quality of a system is suggested to affect perceived usefulness; therefore, we propose hypothesis H2.
H2: 
Performance positively affects perceived usefulness.
This study uses reliability and understandability as variables that constitute cognition-based trust. Reliability generally refers to the degree to which the expected performance is consistently achieved over a specific period under given conditions [48]. In the case of generative AI, reliability refers to the degree to which the produced results are perceived to be consistent and predictable [35]. A system that provides consistent results is trustworthy; thus, reliability has long been suggested as having an important impact on trust in the system [49]. Furthermore, reliability has been suggested and empirically analyzed as a factor affecting trust in IT research [35]. Based on this reasoning, we propose hypothesis H3.
H3: 
Reliability positively affects trust.
Reliability has also been considered a factor forming service quality in the IS literature [50]. Reliable systems consistently perform as expected. This enables users to achieve their goals more efficiently and effectively, thereby enhancing their perception of the system’s usefulness. Prior studies have shown that service quality positively influences perceived usefulness [47]. Therefore, we propose hypothesis H4.
H4: 
Reliability positively affects perceived usefulness.
Understandability is a superordinate concept of transparency and explainability as it represents the extent to which users can interpret and understand the results of generative AI [35]. Previous AI studies [51,52] emphasized transparency and explainability to secure trust in AI by resolving doubts about the black box area; however, in collaboration with today’s generative AI, understandability is emphasized more than transparency or explainability [52]. In other words, users can interpret and understand the results created by generative AI, which can be a decisive factor in trust for collaboration. Thus, we propose hypothesis H5.
H5: 
Understandability positively affects trust.
Furthermore, when users can interpret and understand the outputs of generative AI, they are more likely to apply them effectively to their tasks. This leads to improved performance and strengthens their perception of the system’s usefulness. Prior research highlights the importance of understandability in shaping perceived utility [35,52]. Accordingly, we propose hypothesis H6.
H6: 
Understandability positively affects perceived usefulness.
Familiarity refers to the degree to which an object is understood through accumulated experience [27]. Familiarity is known to increase trust by increasing the understanding of what is currently happening. When users feel familiar with a system, they believe they know it well and trust it based on that belief [27,37,53]. Familiarity is also expected to affect trust in generative AI. If a person frequently uses a specific generative AI and becomes familiar with it, they will trust the generative AI because they feel it is more convenient and predictable. Recent generative AI research has also empirically analyzed that familiarity affects trust in AI [54,55]. Based on this reasoning, we propose hypothesis H7.
H7: 
Familiarity positively affects trust.
Social influence is the process by which the attitudes, behaviors, and opinions of others influence individuals. According to the theory of social proof [56], people tend to trust individuals whom other people trust. Therefore, if people say that a particular system is trustworthy, trust in that system can be easily formed. A study of online stores analyzed reputation, a type of social influence, and its effect on trust in the online store [57]. Moreover, empirical analyses in many IS studies have shown that social influence affects trust [58]. Therefore, we propose hypothesis H8.
H8: 
Social influence positively affects trust.
Trust is an important antecedent of AI usage intentions in many previous studies [59], and trust is increasingly recognized as important due to the problems of early generative AI that failed to produce reliable results [22,60]. In the context of generative AI, trust means receiving and trusting the desired results [33]. In most AIs, humans are responsible for the results if a problem occurs during the use process; therefore, if users lack trust in generative AI, they tend not to use AI for major tasks. Conversely, a high level of trust in generative AI services can increase the intention to collaborate with generative AI [24,61]. Therefore, this study hypothesizes that trust positively affects the intention to collaborate with generative AI. Accordingly, we propose hypothesis H9.
H9: 
Trust positively affects the intention to cooperate.
Generative AI can establish trust by giving users the promised results in terms of quality and consistency [62]. Trustworthy generative AI can be perceived as useful because it behaves predictably and provides the desired results. Conversely, generative AI that is not trusted is perceived as having lost quality and consistency in its results, which can reduce the system’s perceived usefulness. In other words, when trust in a specific object or system is built, positive evaluations and experiences in situations where the system is used accumulate, increasing its perceived usefulness. Previous related studies [27,63] also support this. Recent studies on AI systems have likewise shown that users who trust AI systems evaluate the usefulness of the services or information the system provides more highly [60,61]. Based on this reasoning, we propose hypothesis H10.
H10: 
Trust positively affects perceived usefulness.
Generative AI’s perceived usefulness indicates the belief that the system will help users improve their work performance. Previous studies have shown that when users believe the system’s output can improve their work efficiency, their intention to use it increases [38]. In other words, the system’s usefulness is an important motivation for users to adopt the technology. The user’s intention to accept it naturally increases, especially when the positive aspects provided by the system (such as work efficiency and time savings) become apparent [42]. In this context, the perception of generative AI’s usefulness should directly impact collaboration by inducing the active use of AI systems. Therefore, we propose hypothesis H11.
H11: 
Perceived usefulness positively affects the intention to cooperate.

4. Research Methodology

4.1. Data Collection

This study collected data through an online survey provided by a cloud-based social science research automation site (https://ssra.or.kr/), targeting experts, office workers, and graduate students who mainly use generative AI. The subjects accessed the online survey through web and mobile links provided by the site. A total of 305 valid questionnaires were collected and used for analysis. This study employed a non-probability convenience sampling method, targeting individuals with practical experience of or interest in generative AI. While this approach ensured relevance to the research context, it may introduce potential biases in participant recruitment and limit the generalizability of the findings to the broader population. The respondents included 153 males and 152 females; 33.1% of the respondents were experts, 31.1% were office workers, and 10.2% were students, including graduate students.
The generative AI category covered a wide range, including large language models like ChatGPT, image generation AI like Midjourney, and AI embedded in applications such as MS Copilot and Notion. Participants responded that they mainly used ChatGPT (77.7%), image generation AI (9.8%), and AI embedded in other programs (12.5%). The percentage of respondents with paid usage experience was 35.1%. Table 1 presents detailed descriptive statistics of the respondents.
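The frequencies and percentages in Table 1 are simple tabulations of the sample. As a hedged illustration only, the R lines below show how such a summary could be produced; the data frame survey_data and its column names are assumptions, not the authors' actual file.

# Frequency and percentage summary for one respondent characteristic,
# mirroring the layout of Table 1 (column names are hypothetical).
summarize_measure <- function(x) {
  freq <- table(x)
  data.frame(Value = names(freq),
             Frequency = as.vector(freq),
             Percentage = round(100 * as.vector(freq) / sum(freq), 1))
}

summarize_measure(survey_data$profession)     # expert, office worker, student, ...
summarize_measure(survey_data$ai_usage_type)  # language model, image generation, in-app AI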

4.2. Measurement Development

To measure the constructs of the research model, this study developed measurement items based on reliable and validated scales from the prior literature. In addition to citing the relevant sources, representative items used in the questionnaire are presented below to enhance clarity and transparency.
Perceived usefulness was measured using items such as “Using generative AI improves work efficiency” and “Generative AI is useful for work” based on Davis [41]. Trust was assessed through items including “The outputs of generative AI are trustworthy” and “Overall, I trust generative AI” [33]. The two variables of cognition-based trust, reliability and understandability, were adopted from the study of Madsen and Gregor [35]. Reliability included statements such as “Generative AI performs reliably” and “Generative AI produces consistent outputs under the same conditions.” Understandability was measured with items like “The results from generative AI are simple and easy to understand” and “I have no difficulty understanding the outputs of generative AI.” Performance was assessed through items like “Generative AI produces better results than humans” and “The outputs from generative AI are as excellent as those of a competent person,” drawing from Madsen and Gregor [35], Gursoy et al. [34], and Shin [33]. Furthermore, familiarity was evaluated using items such as “I am familiar with using generative AI” following Gefen et al. [27]. Social influence was measured with items like “My colleagues think it is desirable to use generative AI” and “The general social atmosphere is positive towards using generative AI” based on Venkatesh et al. [38]. Finally, intention to cooperate was measured using items such as “I will actively use generative AI in my work” and “I will collaborate with generative AI whenever possible.”
The questionnaire items were measured using a 7-point Likert scale ranging from “strongly disagree” to “strongly agree.” A full list of the questionnaire items employed in this study is presented in the Appendix A.

5. Results

The analysis of the research model was conducted using the partial least squares structural equation modeling (PLS-SEM) approach with the plspm package in R [64]. The PLS-SEM approach focuses on exploring and explaining new causal relationships rather than confirming theory. It supports the evaluation of the reliability and validity of the measurement indicators through outer model and inner model evaluations [65]. Therefore, this study employed the PLS-SEM approach to verify the reliability and validity of the measurement indicators and the causal relationships between the constructs of the research model.
The research model comprises eight latent variables: performance, reliability, understandability, familiarity, social influence, trust, perceived usefulness, and intention to cooperate. Each latent variable was measured using three to four observed indicators, as detailed in the Appendix A. Within the structural model, performance, reliability, understandability, familiarity, and social influence are treated as exogenous variables. Trust and perceived usefulness serve as mediating constructs, while intention to cooperate is modeled as the final endogenous variable. This configuration enables a comprehensive examination of the pathways through which these antecedents shape human-AI collaboration.
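To make this specification concrete, the following R sketch shows how a model of this form could be set up and estimated with the plspm package [64]. It is a minimal illustration under stated assumptions, not the authors' actual script: the data frame name survey_data and the item columns a1–a29 (numbered as in Appendix A) are hypothetical, all constructs are treated as reflective (Mode A), and the bootstrap settings are arbitrary.

library(plspm)

# Inner (structural) model: a 1 in column j of row i means "construct j -> construct i".
PF  <- c(0, 0, 0, 0, 0, 0, 0, 0)
REL <- c(0, 0, 0, 0, 0, 0, 0, 0)
US  <- c(0, 0, 0, 0, 0, 0, 0, 0)
FAM <- c(0, 0, 0, 0, 0, 0, 0, 0)
SI  <- c(0, 0, 0, 0, 0, 0, 0, 0)
TR  <- c(1, 1, 1, 1, 1, 0, 0, 0)   # H1, H3, H5, H7, H8
PU  <- c(1, 1, 1, 0, 0, 1, 0, 0)   # H2, H4, H6, H10
ITC <- c(0, 0, 0, 0, 0, 1, 1, 0)   # H9, H11
path_matrix <- rbind(PF, REL, US, FAM, SI, TR, PU, ITC)
colnames(path_matrix) <- rownames(path_matrix)

# Outer (measurement) model: indicator columns per construct, as listed in Appendix A.
blocks <- list(
  c("a5", "a6", "a7", "a8"),      # Performance
  c("a9", "a10", "a11", "a12"),   # Reliability
  c("a13", "a14", "a15", "a16"),  # Understandability
  c("a17", "a18", "a19"),         # Familiarity
  c("a20", "a21", "a22", "a23"),  # Social Influence
  c("a24", "a25", "a26"),         # Trust
  c("a1", "a2", "a3", "a4"),      # Perceived Usefulness
  c("a27", "a28", "a29")          # Intention to Cooperate
)

# Reflective (Mode A) measurement for all eight constructs; bootstrap for significance.
fit <- plspm(survey_data, path_matrix, blocks,
             modes = rep("A", 8), boot.val = TRUE, br = 500)

fit$unidim         # Cronbach's alpha and composite (Dillon-Goldstein) reliability
fit$inner_summary  # R-squared and AVE for each construct
fit$inner_model    # path coefficients with t- and p-values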

5.1. Reliability and Validity of the Measurement Items

This study used the composite construct reliability (CCR) value, commonly used in structural equation modeling, to evaluate the reliability of the measurement items for the constructs in the research model, together with the Cronbach’s alpha coefficient, which is widely used to evaluate reliability in social science research. Table 2 shows that all constructs’ CCR values and Cronbach’s alpha coefficients far exceed the threshold value of 0.70 [66]. Both conditions for evaluating reliability were satisfied; thus, the measurement items included in the constructs were reliable.
This study performed confirmatory factor analysis (CFA) using PLS-SEM to evaluate the validity of the constructs in the research model. In CFA using PLS-SEM, convergent validity is guaranteed when the average variance extracted (AVE) of the construct to which each measurement item is assigned exceeds 0.50 [67]. Table 2 shows that the AVE values of all the constructs are higher than 0.5; thus, the convergent validity of the constructs of this research model is high. To secure discriminant validity, the square root of the AVE of each construct must be greater than the correlation with other constructs [67]. Table 3 shows that the square root of the AVE of all constructs in the research model is greater than the correlation with other constructs. Therefore, the discriminant validity of all constructs was proven.
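For transparency, these criteria correspond to standard formulas: with standardized loadings λ, CCR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ²/n, and the Fornell–Larcker test compares the square root of each AVE with the construct's correlations. The short R sketch below illustrates the checks; the loading values are hypothetical, not the study's estimates.

# Composite construct reliability and average variance extracted from
# standardized loadings; thresholds are 0.70 (CCR) and 0.50 (AVE).
ccr <- function(lambda) sum(lambda)^2 / (sum(lambda)^2 + sum(1 - lambda^2))
ave <- function(lambda) mean(lambda^2)

trust_loadings <- c(0.90, 0.92, 0.91)   # hypothetical loadings for a three-item construct
ccr(trust_loadings)                     # should exceed 0.70
ave(trust_loadings)                     # should exceed 0.50

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its
# correlations with all other constructs (cf. the diagonal of Table 3).
fornell_larcker <- function(ave_vec, cor_mat) {
  sqrt_ave <- sqrt(ave_vec)
  ok <- sapply(seq_along(ave_vec),
               function(i) all(sqrt_ave[i] > abs(cor_mat[i, -i])))
  data.frame(Construct = names(ave_vec), Sqrt_AVE = sqrt_ave, Discriminant_OK = ok)
}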

5.2. Hypothesis Testing

We conducted a hypothesis test on the research model using measurement items verified for reliability, convergent validity, and discriminant validity. This research model was composed of a structural equation model, and the modeling was performed for hypothesis testing. Figure 2 presents the results of the structural equation modeling, and Table 4 presents the detailed hypothesis testing results.
Table 4 shows that performance, an antecedent of calculative-based trust, significantly affected both trust and perceived usefulness (p < 0.01); therefore, Hypotheses 1 and 2 were accepted. Reliability, the first antecedent of cognition-based trust, had a positive (+) effect on trust (p < 0.01) but a negative (–) effect on perceived usefulness (p < 0.05). Therefore, Hypothesis 3 was accepted, but Hypothesis 4 was rejected. Understandability, the second antecedent of cognition-based trust, had a positive (+) effect on trust at the 0.05 level (p < 0.05) and on perceived usefulness at the 0.01 level (p < 0.01). Therefore, Hypotheses 5 and 6 were accepted. Familiarity, an antecedent of knowledge-based trust, had an insignificant path coefficient toward trust (p > 0.05), so Hypothesis 7 was rejected. In contrast, social influence positively (+) affected trust (p < 0.01); therefore, Hypothesis 8 was accepted. Both perceived usefulness and trust positively (+) influenced the intention to collaborate with AI (p < 0.01), and trust had a positive (+) effect on perceived usefulness (p < 0.01). Thus, Hypotheses 9–11 were all supported.
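For readers replicating these tests, the coefficients and p-values in Table 4 correspond to the structural-model output of the PLS estimation. The lines below sketch how they could be inspected, assuming the fit object from the earlier plspm example was estimated with bootstrapping enabled; the output element names follow the plspm documentation and may vary by package version.

fit$inner_model   # per-equation path coefficients with t- and p-values
fit$boot$paths    # bootstrap means, standard errors, and 95% percentile bounds

# A directional hypothesis (e.g., H1: Performance -> Trust) is supported when the
# estimated coefficient carries the hypothesized positive sign and is significant,
# for instance when the bootstrap percentile interval excludes zero.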

6. Discussion

This study established a model for the two most important aspects of collaboration between humans and generative AI: trust and perceived usefulness. We analyzed the results to determine how trust and perceived usefulness affect the intention to collaborate with generative AI and identify the antecedents of trust and perceived usefulness. The main results are as follows.
First, according to the results of this study, both trust and perceived usefulness were found to be antecedents that affect the intention to collaborate with generative AI. Like existing systems that apply IT technology, generative AI is also affected by perceived usefulness, as Davis [41] suggested. At the same time, trust is a key variable for collaboration. Furthermore, trust is a factor that affects perceived usefulness, as has been found in many previous studies [27,60,61,63].
Second, three significant dimensions were identified as antecedents of trust: calculative-based trust, cognition-based trust, and social influence-based trust all helped form trust in generative AI. In contrast, knowledge-based trust did not, even though many studies have found that familiarity affects trust [27,37,53]. This result could be due to the characteristics of the respondents in this study. Most respondents had experience with generative AI and were already familiar with it; therefore, they may have judged that factors other than familiarity are important in forming trust in generative AI.
Finally, the results of this study revealed that some of the antecedents of trust also worked as antecedents of perceived usefulness. Understandability (cognition-based trust) and performance (calculative-based trust) were antecedents of perceived usefulness. In contrast, reliability (cognition-based trust) was expected to affect perceived usefulness positively; however, it had a negative effect. This finding suggests that, in the context of generative AI, users may prioritize novelty and creativity over consistency. Repetitive or predictable outputs may feel less useful, as users often seek surprising or imaginative results that go beyond conventional expectations. This challenges the assumption that reliability always enhances usefulness and highlights the need to reconsider how trust dimensions function in creative AI environments. Unlike general information technologies, people find generative AI useful because of the special results that exceed their expectations, not because of the consistent results they can anticipate. In other words, many people do not find it helpful when AI provides only conventional results.

6.1. Contributions and Implications

Research on trust and the TAM has been conducted for a long time, from traditional technology fields to recent cutting-edge technologies. Research in the IT field has primarily focused on two key variables related to systems: usefulness and ease of use [38,41]. Building on this research stream, we added trust, an essential requirement for AI collaboration, as an important factor and combined it with the existing research flow. Furthermore, by revealing the factors affecting trust in AI, we presented criteria for classifying trust in AI use. The results also have the following practical implications. First, companies that provide generative AI should develop systems that focus on users’ trust in generative AI. Consistent and high-quality results should be provided to increase cognition-based and calculative-based trust. In other words, actual performance is more important than a sense of familiarity with the system; therefore, development should focus on improving the qualitative aspects of AI results rather than on familiarity with use. Second, given the confirmed importance of trust based on social influence, strategies to enhance social awareness of generative AI use are necessary. Specifically, companies or institutions can share AI use cases and disseminate users’ positive experiences and evaluations to build social trust. Additionally, through educational programs and promotional activities that highlight the benefits and safety of AI use, user trust can be strengthened, thereby fostering a positive attitude toward collaboration with AI. Third, the findings of this study offer novel insights into how trust influences users’ intention to collaborate with generative AI. To foster trust, policymakers should establish clear guidelines and standards that promote the delivery of understandable and transparent AI outputs. Meanwhile, industry professionals and practitioners are encouraged to move beyond a narrow focus on technical performance and instead prioritize trust-enhancing features in system design.

6.2. Limitations and Recommendations for Future Research

This study’s results provide meaningful implications for researchers and practitioners, but they have the following limitations. First, this study aimed to investigate ways to secure trust in generative AI by revealing the multidimensional antecedents of trust in collaboration with generative AI. The research model therefore did not include the ethical aspects of AI or people’s psychological resistance, which means it may not be fully suitable as a comprehensive model encompassing all dimensions of trust. Future expanded research that includes these variables is essential. Second, this study excluded emotional trust and focused on generative AI as a practical technology; future research should conduct empirical analysis that adds emotional trust factors as an exploratory dimension, which will also support a more comprehensive research model. Third, since the survey was conducted exclusively with Korean respondents, cultural factors may have influenced the observed relationships. As a result, similar findings may not necessarily be replicated in other national or organizational contexts. Future studies should conduct cross-cultural comparative research to further assess the generalizability of the proposed model. Finally, this survey was conducted at a time when relatively few people actively utilize generative AI; therefore, the scope of the survey subjects’ activities was narrow. Future research targeting experts in more practical fields is needed.

Author Contributions

H.-S.C.: methodology, data curation, writing—original draft, formal analysis. C.Y.: conceptualization, writing—review and editing, supervision, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Research Funds of Mokpo National University in 2024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and/or analyzed in this study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix A

Perceived Usefulness
a1.
Using generative AI improves work efficiency.
a2.
Using generative AI enhances productivity.
a3.
Using generative AI allows for quicker task completion.
a4.
Generative AI is useful for work.
Performance
a5.
The results of generative AI fit my requirements and objectives well.
a6.
Generative AI produces better results than humans.
a7.
Generative AI makes fewer errors and is more accurate than humans.
a8.
The outputs from generative AI are as excellent as those of a competent person.
Reliability
a9.
Generative AI performs reliably.
a10.
Generative AI produces consistent outputs under the same conditions.
a11.
I can rely on generative AI to work properly.
a12.
Generative AI analyzes problems consistently.
Understandability
a13.
The outputs generated by generative AI are clearly structured.
a14.
The results from generative AI are simple and easy to understand.
a15.
If I lack understanding of the results, I can get sufficient explanations through generative AI.
a16.
I have no difficulty understanding the outputs of generative AI.
Familiarity
a17.
I am familiar with using generative AI.
a18.
I am familiar with the work methods using generative AI.
a19.
I am familiar with working with generative AI.
Social Influence
a20.
People I consider wise prefer using generative AI.
a21.
My colleagues think it is desirable to use generative AI.
a22.
Many people believe that generative AI should be used in work whenever possible.
a23.
The general social atmosphere is positive towards using generative AI.
Trust
a24.
The outputs of generative AI are trustworthy.
a25.
As a partner for collaboration, generative AI is reliable.
a26.
Overall, I trust generative AI.
Intention to Cooperate
a27.
I will actively use generative AI in my work.
a28.
I will use generative AI in my work rather than working alone.
a29.
I will collaborate with generative AI whenever possible.

References

  1. Banh, L.; Strobel, G. Generative artificial intelligence. Electron. Mark. 2023, 33, 63. [Google Scholar] [CrossRef]
  2. Feuerriegel, S.; Hartmann, J.; Janiesch, C.; Zschech, P. Generative AI. Bus. Inf. Syst. Eng. 2024, 66, 111–126. [Google Scholar] [CrossRef]
  3. Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; Fung, P. Survey of hallucination in natural language generation. ACM Comput. Surv. 2023, 55, 1–38. [Google Scholar] [CrossRef]
  4. Sahoo, N.R.; Saxena, A.; Maharaj, K.; Ahmad, A.A.; Mishra, A.; Bhattacharyya, P. Addressing Bias and Hallucination in Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, 20–25 May 2024; pp. 73–79. [Google Scholar]
  5. Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277–304. [Google Scholar] [CrossRef]
  6. Susarla, A.; Gopal, R.; Thatcher, J.B.; Sarker, S. The Janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Inf. Syst. Res. 2023, 34, 399–408. [Google Scholar] [CrossRef]
  7. OpenAI. Gpt-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar] [CrossRef]
  8. Kipp, M. From GPT-3.5 to GPT-4. o: A Leap in AI’s Medical Exam Performance. Information 2024, 15, 543. [Google Scholar] [CrossRef]
  9. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  10. Lin, X.; Liang, Y.; Zhang, Y.; Hu, Y.; Yin, B. IE-GAN: A data-driven crowd simulation method via generative adversarial networks. Multimed. Tools Appl. 2023, 83, 1–34. [Google Scholar] [CrossRef]
  11. Kar, A.K.; Varsha, P.S.; Rajan, S. Unravelling the impact of generative artificial intelligence (GAI) in industrial applications: A review of scientific and grey literature. Glob. J. Flex. Syst. Manag. 2023, 24, 659–689. [Google Scholar] [CrossRef]
  12. Man, K.; Chahl, J. A review of synthetic image data and its use in computer vision. J. Imaging 2022, 8, 310. [Google Scholar] [CrossRef]
  13. Jabbar, A.; Li, X.; Omar, B. A survey on generative adversarial networks: Variants, applications, and training. ACM Comput. Surv. (CSUR) 2021, 54, 1–49. [Google Scholar] [CrossRef]
  14. Cai, C.J.; Winter, S.; Steiner, D.; Wilcox, L.; Terry, M. Hello AI: Uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–24. [Google Scholar] [CrossRef]
  15. Wang, D.; Weisz, J.D.; Muller, M.; Ram, P.; Geyer, W.; Dugan, C.; Tausczik, Y.; Samulowitz, H.; Gray, A. Human-AI collaboration in data science: Exploring data scientists’ perceptions of automated AI. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–24. [Google Scholar] [CrossRef]
  16. Zhang, G.; Chong, L.; Kotovsky, K.; Cagan, J. Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Comput. Hum. Behav. 2023, 139, 107536. [Google Scholar] [CrossRef]
  17. Sowa, K.; Przegalinska, A.; Ciechanowski, L. Cobots in knowledge work: Human–AI collaboration in managerial professions. J. Bus. Res. 2021, 125, 135–142. [Google Scholar] [CrossRef]
  18. Lai, Y.; Kankanhalli, A.; Ong, D. Human-AI collaboration in healthcare: A review and research agenda. In Proceedings of the 54th Hawaii International Conference on System Sciences, Maui, HI, USA, 5–8 January 2021. [Google Scholar]
  19. Wang, D.; Churchill, E.; Maes, P.; Fan, X.; Shneiderman, B.; Shi, Y.; Wang, Q. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems that Can Work Together with People. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–6. [Google Scholar]
  20. Fan, M.; Yang, X.; Yu, T.; Liao, Q.V.; Zhao, J. Human-ai collaboration for UX evaluation: Effects of explanation and synchronization. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–32. [Google Scholar] [CrossRef]
  21. Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Müller, H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1312. [Google Scholar] [CrossRef]
  22. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  23. Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-machine collaboration in managerial decision making. Comput. Hum. Behav. 2021, 119, 106730. [Google Scholar] [CrossRef]
  24. Vössing, M.; Kühl, N.; Lind, M.; Satzger, G. Designing transparency for effective human-AI collaboration. Inf. Syst. Front. 2022, 24, 877–895. [Google Scholar] [CrossRef]
  25. Rousseau, D.M.; Sitkin, S.B.; Burt, R.S.; Camerer, C. Not so different after all: A cross-discipline view of trust. Acad. Manag. Rev. 1998, 23, 393–404. [Google Scholar] [CrossRef]
  26. Lewicki, R.J.; Bunker, B.B. Developing and Maintaining Trust in Work Relationships. In Trust in Organizations: Frontiers in Theory and Research; Kramer, R.M., Tyler, T.R., Eds.; Sage Publications: Thousand Oaks, CA, USA, 1996; pp. 114–139. [Google Scholar]
  27. Gefen, D.; Karahanna, E.; Straub, D. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  28. Lewis, J.D.; Weigert, A. Trust as a social reality. Soc. Forces 1985, 63, 967–985. [Google Scholar] [CrossRef]
  29. Shapiro, S.P. The social control of impersonal trust. Am. J. Sociol. 1987, 93, 623–658. [Google Scholar] [CrossRef]
  30. Zucker, L.G. Production of trust: Institutional sources of economic structure, 1840–1920. Res. Organ. Behav. 1986, 8, 53–111. [Google Scholar]
  31. Mei, J.P.; Yu, H.; Shen, Z.; Miao, C. A social influence based trust model for recommender systems. Intell. Data Anal. 2017, 21, 263–277. [Google Scholar] [CrossRef]
  32. Li, X.; Hess, T.J.; Valacich, J.S. Why do we trust new technology? A study of initial trust formation with organizational information systems. J. Strateg. Inf. Syst. 2008, 17, 39–71. [Google Scholar] [CrossRef]
  33. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  34. Gursoy, D.; Chi, O.H.; Lu, L.; Nunkoo, R. Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int. J. Inf. Manag. 2019, 49, 157–169. [Google Scholar] [CrossRef]
  35. Madsen, M.; Gregor, S. Measuring Human-Computer Trust. In Proceedings of the 11th Australasian Conference on Information Systems, Brisbane, Australia, 6–8 December 2000; Volume 53, pp. 6–8. [Google Scholar]
  36. Cloutier, J.; Kelley, W.M.; Heatherton, T.F. The influence of perceptual and knowledge-based familiarity on the neural substrates of face perception. Soc. Neurosci. 2011, 6, 63–75. [Google Scholar] [CrossRef]
  37. Gefen, D. E-commerce: The role of familiarity and trust. Omega 2000, 28, 725–737. [Google Scholar] [CrossRef]
  38. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  39. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  40. Yilmaz, F.G.K.; Yilmaz, R.; Ceylan, M. Generative artificial intelligence acceptance scale: A validity and reliability study. Int. J. Hum.—Comput. Interact. 2024, 40, 8703–8715. [Google Scholar] [CrossRef]
  41. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  42. Davenport, T.H.; Kirby, J. Only Humans Need Apply: Winners and Losers in the Age of Smart Machines; Harper Business: New York, NY, USA, 2016; pp. 1–281. [Google Scholar]
  43. Rajpurkar, P.; Hannun, A.Y.; Haghpanahi, M.; Bourn, C.; Ng, A.Y. Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv 2017, arXiv:1707.01836. [Google Scholar] [CrossRef]
  44. Andriluka, M.; Uijlings, J.R.; Ferrari, V. Fluid annotation: A human-machine collaboration interface for full image annotation. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 1957–1966. [Google Scholar]
  45. Mayer, R.C. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  46. DeLone, W.H.; McLean, E.R. Information systems success: The quest for the dependent variable. Inf. Syst. Res. 1992, 3, 60–95. [Google Scholar] [CrossRef]
  47. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  48. Mcknight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 1–25. [Google Scholar] [CrossRef]
  49. Lankton, N.K.; McKnight, D.H.; Tripp, J. Technology, humanness, and trust: Rethinking trust in technology. J. Assoc. Inf. Syst. 2015, 16, 1. [Google Scholar] [CrossRef]
  50. Pitt, L.F.; Watson, R.T.; Kavan, C.B. Service quality: A measure of information systems effectiveness. MIS Q. 1995, 19, 173–187. [Google Scholar] [CrossRef]
  51. Von Eschenbach, W.J. Transparency and the black box problem: Why we do not trust AI. Philos. Technol. 2021, 34, 1607–1622. [Google Scholar] [CrossRef]
  52. Ferrario, A.; Loi, M. How explainability contributes to trust in AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 1457–1466. [Google Scholar]
  53. Luhmann, N. Familiarity, confidence, trust: Problems and alternatives. Trust Mak. Break. Coop. Relat. 2000, 6, 94–107. [Google Scholar]
  54. Horowitz, M.C.; Kahn, L.; Macdonald, J.; Schneider, J. Adopting AI: How familiarity breeds both trust and contempt. AI Soc. 2024, 39, 1721–1735. [Google Scholar] [CrossRef]
  55. Arce-Urriza, M.; Chocarro, R.; Cortiñas, M.; Marcos-Matás, G. From familiarity to acceptance: The impact of Generative Artificial Intelligence on consumer adoption of retail chatbots. J. Retail. Consum. Serv. 2025, 84, 104234. [Google Scholar] [CrossRef]
  56. Cialdini, R.B. Social Proof: Truths are Us. Influence: Science and Practice, 5th ed.; Allyn & Bacon: Boston, MA, USA, 2009; pp. 97–140. [Google Scholar]
  57. Jarvenpaa, S.L.; Tractinsky, N.; Vitale, M. Consumer trust in an Internet store. Inf. Technol. Manag. 2000, 1, 45–71. [Google Scholar] [CrossRef]
  58. Chaouali, W.; Yahia, I.B.; Souiden, N. The interplay of counter-conformity motivation, social influence, and trust in customers’ intention to adopt Internet banking services: The case of an emerging country. J. Retail. Consum. Serv. 2016, 28, 209–218. [Google Scholar] [CrossRef]
  59. Choi, J.; Park, J.; Suh, J. Evaluating the Current State of ChatGPT and Its Disruptive Potential: An Empirical Study of Korean Users. Asia Pac. J. Inf. Syst. 2023, 33, 1058–1092. [Google Scholar] [CrossRef]
  60. Choung, H.; David, P.; Ross, A. Trust in AI and its role in the acceptance of AI technologies. Int. J. Hum.–Comput. Interact. 2023, 39, 1727–1739. [Google Scholar] [CrossRef]
  61. Baroni, I.; Calegari, G.R.; Scandolari, D.; Celino, I. AI-TAM: A model to investigate user acceptance and collaborative intention in human-in-the-loop AI applications. Hum. Comput. 2022, 9, 1–21. [Google Scholar] [CrossRef]
  62. Ganesan, S. Determinants of long-term orientation in buyer-seller relationships. J. Mark. 1994, 58, 1–19. [Google Scholar] [CrossRef]
  63. Yoon, C. The effects of national culture values on consumer acceptance of e-commerce: Online shoppers in China. Inf. Manag. 2009, 46, 294–301. [Google Scholar] [CrossRef]
  64. Sanchez, G.; Trinchera, L.; Russolillo, G. plspm: Tools for Partial Least Squares Path Modeling (PLS-PM); R Package; 2013. [Google Scholar]
  65. Hair, J.F.; Hult, G.T.M.; Ringle, C.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2013. [Google Scholar]
  66. Bagozzi, R.P.; Yi, Y. On the evaluation of structural equation models. J. Acad. Mark. Sci. 1988, 16, 74–94. [Google Scholar] [CrossRef]
  67. Gefen, D.; Straub, D. A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Commun. Assoc. Inf. Syst. 2005, 16, 5. [Google Scholar] [CrossRef]
Figure 1. Research model.
Figure 2. Path diagram of the research model.
Table 1. Descriptive statistics of respondent characteristics.

Measure                  Value                                    Frequency   Percentage
Gender                   Male                                     153         50.2
                         Female                                   152         49.8
                         Total                                    305         100.0
Age                      25–29                                    48          15.7
                         30–39                                    67          22.0
                         45–49                                    119         39.0
                         More than 50                             71          23.3
                         Total                                    305         100.0
AI usage type            Language learning model                  237         77.7
                         Image generation program                 30          9.8
                         In-app AI                                38          12.5
                         Total                                    305         100.0
Profession               Student (including graduate students)    31          10.2
                         Office worker                            95          31.1
                         Expert                                   101         33.1
                         Self-employed                            27          8.9
                         Other                                    51          16.7
                         Total                                    305         100.0
Paid usage experience    Yes                                      107         35.1
                         No                                       198         64.9
                         Total                                    305         100.0
Table 2. Reliability.

Construct                Item No.   C. Alpha *   CCR **   AVE ***
Performance              4          0.874        0.914    0.726
Reliability              4          0.877        0.916    0.731
Understandability        4          0.852        0.900    0.692
Familiarity              3          0.914        0.946    0.852
Social Influence         4          0.909        0.936    0.786
Trust                    3          0.901        0.938    0.835
Perceived Usefulness     4          0.944        0.960    0.856
Intention to Cooperate   3          0.962        0.975    0.929
* Cronbach’s alpha, ** Composite Construct Reliability, *** AVE: Average Variance Extracted.
Table 3. Average Variance Extracted and Correlation Matrix.

Construct   PF       REL      US       FAM      SI       TR       PU       ITC
PF          (0.85)
REL         0.76     (0.86)
US          0.71     0.75     (0.83)
FAM         0.43     0.49     0.52     (0.92)
SI          0.62     0.58     0.60     0.66     (0.89)
TR          0.77     0.77     0.72     0.55     0.75     (0.91)
PU          0.58     0.48     0.55     0.39     0.52     0.56     (0.93)
ITC         0.62     0.53     0.58     0.65     0.74     0.72     0.58     (0.96)
Mean        4.83     4.83     5.07     4.64     4.87     4.82     5.67     5.10
SD          1.26     1.23     1.08     1.43     1.23     1.27     1.16     1.34
( ): Square root of AVE. PF: Performance, REL: Reliability, US: Understandability, FAM: Familiarity, SI: Social Influence, TR: Trust, PU: Perceived Usefulness, ITC: Intention to Cooperate.
Table 4. Hypothesis testing results.

Hypothesis                                             Sign   Path Coefficient   t-Value   p-Value
H1. Performance -> Trust                               (+)    0.248              3.055     0.001
H2. Performance -> Perceived Usefulness                (+)    0.317              3.909     0.000
H3. Reliability -> Trust                               (+)    0.289              3.263     0.001
H4. Reliability -> Perceived Usefulness                (+)    −0.154             −1.668    0.048
H5. Understandability -> Trust                         (+)    0.110              1.953     0.026
H6. Understandability -> Perceived Usefulness          (+)    0.261              2.884     0.002
H7. Familiarity -> Trust                               (+)    0.003              0.068     0.473
H8. Social Influence -> Trust                          (+)    0.365              7.226     0.000
H9. Trust -> Intention to Cooperate                    (+)    0.583              10.266    0.000
H10. Trust -> Perceived Usefulness                     (+)    0.248              2.978     0.002
H11. Perceived Usefulness -> Intention to Cooperate    (+)    0.247              4.047     0.000
Trust R2: 0.770. Perceived Usefulness R2: 0.394. Intention to Cooperate R2: 0.562.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
