Article

Research on Influencing Factors of Users’ Willingness to Adopt GAI for Collaborative Decision-Making in Generative Artificial Intelligence Context

1 School of Economics and Finance, Hohai University, Changzhou 213200, China
2 Business School, Hohai University, Nanjing 211100, China
3 Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(19), 10322; https://doi.org/10.3390/app151910322
Submission received: 30 August 2025 / Revised: 20 September 2025 / Accepted: 22 September 2025 / Published: 23 September 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Exploring the influencing factors and mechanisms of willingness to adopt GAI for collaborative decision-making in the generative artificial intelligence context is of significant importance for advancing the application of collaborative decision-making between human intelligence and generative AI. This study builds upon the traditional Technology Acceptance Model (TAM) and the Task–Technology Fit (TTF) model by introducing human–GAI trust and collaborative efficacy to construct a theoretical model of the factors influencing willingness to adopt GAI for collaborative decision-making. Empirical analysis is conducted using Structural Equation Modeling (SEM) and Fuzzy-set Qualitative Comparative Analysis (fsQCA). The results show that perceived usefulness and collaborative efficacy emerge as key determinants of willingness to adopt GAI for collaborative decision-making. Attitude and human–GAI trust exert significant direct positive effects, while perceived ease of use and task–technology fit demonstrate significant indirect positive influences. The fsQCA results further identify three distinct configurational pathways: perceived value-driven, functional compensation-driven, and trust in technology-driven.

1. Introduction

With the rapid advancement of science and technology, generative artificial intelligence (GAI) is profoundly transforming how humans think and work. As an innovative technology, GAI possesses multimodal data processing and generation capabilities—encompassing text, images, audio, and video—along with strong self-learning and rapid iteration capacities. It has progressively permeated many levels of the human decision-making process. Traditionally, computational decision-making followed a closed and static paradigm of machine-assisted decision-making, characterized by one-way information flow: machines produced decision solutions based on predefined algorithms, whereas humans passively received information and executed decisions. This paradigm is typically limited to single or narrowly defined scenarios [1]. However, the emergence of GAI enables artificial intelligence to engage more deeply in decision-making processes. Through continuous interaction, humans and GAI jointly influence decision outcomes, pushing human–GAI collaborative decision-making toward optimality [2]. This dynamic interplay between human intelligence and GAI is reshaping the boundaries and processes of traditional decision-making [3].
From the perspective of decision-making data sources, GAI leverages the cross-domain knowledge integration capabilities of large models to semantically analyze and distill knowledge from data across scenarios. Moreover, by combining out-of-domain information with traditional in-domain data, GAI enhances decision accuracy and facilitates a shift from closed- to open-domain big data. From the perspective of decision-making roles, GAI has evolved from merely serving as an information provider to functioning as an assistant in formulating decision solutions [4]. GAI can generate multiple feasible solutions tailored to the nature of the problem and target requirements, and further provide risk–benefit analyses of each alternative [1]. From the perspective of the decision-making process, GAI continuously optimizes and generates decision solutions through reinforcement learning and counterfactual reasoning, while human decision-makers ensure the transparency of the process via iterative dialogue and negotiation facilitated by interactive GAI interfaces. Together, these elements establish a dynamic, bidirectional, and collaborative decision-making system [5], transforming the process from a traditional linear model to a dynamic, interactive one. In summary, the joint participation of human intelligence and GAI in decision-making not only broadens access to decision-relevant information but also reconfigures role allocation within the process. This synergy leverages the complementary strengths of humans and GAI to enable deeper, more effective collaboration—commonly referred to as human–GAI collaborative decision-making.
Human–GAI collaborative decision-making refers to the integration of the complementary strengths of human intelligence (such as creativity, intuitive judgment, and emotional understanding) and generative artificial intelligence (including data processing, pattern recognition, and efficient computation) to enhance cognitive processes and optimize decision-making effectiveness through dynamic interaction. Ultimately, this synergy achieves mutually beneficial, or win–win, decision outcomes through collaboration between humans and GAI [6]. At present, GAI is becoming increasingly embedded in daily life, with more people than ever relying on it to guide decisions and actions. Financial professionals utilize GAI to enhance customer service efficiency and improve the accuracy of risk assessments. Educators and students apply it for automated evaluation of assignments and personalized tutoring, while travelers use GAI to generate highly customized itineraries [7]. As its applications continue to expand, GAI has gradually penetrated multiple critical domains, including educational assessment, medical diagnosis, financial risk management, and smart manufacturing, advancing human–GAI collaborative decision-making from a theoretical concept to widespread industrial practice.
However, the application and implementation of human–GAI collaborative decision-making still face several significant challenges. GAI inherently harbors issues such as data security threats, privacy breaches, and algorithmic bias [8]. Coupled with the ambiguous boundaries of human–GAI responsibility in decision-making processes [9] and the lack of clear regulatory frameworks and ethical guidelines [7], these problems erode decision-makers’ trust in GAI-generated solutions. This erosion of trust substantially diminishes the intention to collaborate, hindering the deeper development of human–machine collaboration.
Therefore, investigating the factors influencing users’ willingness to adopt GAI for collaborative decision-making is critical for advancing human–GAI collaborative decision-making applications. This study not only enriches the theoretical foundations but also provides actionable insights for designing collaborative decision-making models and optimizing intelligent decision-making systems in organizations.

2. Literature Review

GAI, utilizing the self-attention mechanism fundamental to transformer architectures, demonstrates exceptional proficiency in parallel text processing and in modeling dynamic semantic relationships. This capability enables the autonomous generation of multimodal content, including text, images, and audio, from large-scale datasets, thereby providing comprehensive decision-making references. GAI has also emerged as a pivotal innovation in AI development, distinguished by its user-friendly interface, intuitive operation, and superior performance metrics [3]. Unlike traditional AI, which is constrained by predetermined rules and fixed decision boundaries [10], GAI is adept at analyzing probabilistic data distributions and generating novel solutions, significantly enhancing its utility as a decision support tool [11]. Moreover, recent advancements in machine learning and deep learning architectures have expanded AI’s capabilities beyond conventional data processing tasks to the creation of original, realistic, and creative outputs [12]. In addition, traditional AI typically depends on structured data for model construction and information processing within restricted functional applications [7]. By contrast, GAI can process unstructured and multimodal data, enabling it to support more complex and uncertain decision-making tasks. Collectively, these capabilities demonstrate GAI’s significant advantages in advancing human–machine collaborative decision-making, and this innovative decision-making paradigm has attracted widespread attention and in-depth exploration in the academic community.
Existing studies have revealed the technological breakthroughs brought about by GAI compared to traditional AI in human–GAI collaborative decision-making in multiple domains. In the government affairs field, Silva [13] provided practical solutions for scientific and precise decision-making in digital government by utilizing GPT-4 to automate and simplify the processing of government documents. In the healthcare field, Rao et al. [14] developed the Generation of Medical Image Reports (GenMI) system, which addresses the limitations of single-source data analysis inherent to conventional AI. By integrating multimodal clinical data with medical imaging, GenMI generates comprehensive diagnostic reports, simultaneously reducing clinician workload and improving diagnostic accuracy. Supply chain management represents another field benefiting from GAI integration. A study by Li et al. [15], involving 236 Chinese enterprises adopting GAI technologies, reported statistically significant improvements in both supplier and buyer coordination. These gains, in turn, translated into measurable improvements in overall supply chain performance, particularly in inventory turnover and order fulfillment rates.
In addition to demonstrating the enhancement of human–GAI collaborative decision-making in specific application scenarios, several studies have also emphasized the critical role of GAI in the decision-making process itself, focusing on decision-making behavior. On the GAI side of the collaborative decision-making behavior, scholars have examined the specific decision-making performance of GAI. Osborne et al. [16], through double-blind controlled experiments, found that although participants exhibited subjective bias against GAI-generated suggestions, objective evaluations showed that these suggestions were significantly superior to human benchmarks in terms of quality, validity, and accuracy. Similarly, Chen et al. [17], in analyzing four economic decision-making scenarios—risk, time, social, and food preferences—found that GPT demonstrated greater decision consistency than humans and that its rationality was relatively insensitive to random variations. Using a cognitive psychology framework, Binz et al. [18] reported that GPT-3 outperformed humans in multitasking decision-making but exhibited notable shortcomings in terms of causal reasoning. Together, these studies highlight the potential of GAI to address complex decision-making tasks. However, significant challenges remain, most notably the phenomenon of “model hallucination,” in which the system produces outputs that appear plausible but are irrelevant or erroneous. This issue undermines users’ trust in the reliability of knowledge generated by GAIs [19]. In response, Huang et al. [20] proposed the OPERA approach, which mitigates hallucinations in large language models using overtrust penalties and backtracking assignment strategies without requiring additional data or retraining. On the human side of the collaborative decision-making behavior, research has primarily examined how individuals perceive and respond to GAI collaborative decision-making. Zhang et al. [21] found that as GAI’s behavior becomes more similar to human behavior, individuals’ aversion to algorithmic decision-making diminishes, and their likelihood of relying on GAI correspondingly increases. Celiktutan et al. [22], through a series of experiments, uncovered a bias in self–other cognition during human–computer collaboration. Individuals tend to believe that they themselves use GAI primarily as a source of inspiration, while perceiving others as using it more for task outsourcing. This cognitive discrepancy directly gives rise to a double standard in humans’ acceptance of GAI.
The above studies show that research on human–GAI collaborative decision-making has explored both decision-making behaviors and practical applications, highlighting the potential of GAI to enhance the quality and efficiency of decision-making. Nevertheless, the existing literature has largely concentrated on evaluating technical capabilities and analyzing applications. While some studies have touched upon the behavioral mechanisms underlying GAI’s role in decision-making, few have examined users’ willingness or motivation to engage in collaborative decision-making with GAI. In particular, there is a lack of systematic investigation into users’ psychological acceptance of and expectations for GAI-assisted decision-making.
To address this gap, the study integrates the Technology Acceptance Model (TAM) and the Task–Technology Fit (TTF) model to develop a conceptual framework for understanding the factors influencing users’ willingness to adopt GAI for collaborative decision-making. Structural Equation Modeling (SEM) is employed to analyze the relationships among these factors, while Fuzzy-set Qualitative Comparative Analysis (fsQCA) is used to identify multiple configurational paths that lead to high decision-making willingness. This mixed-methods approach allows for a more comprehensive exploration of the key factors and interaction patterns shaping users’ adoption of GAI in collaborative decision-making contexts.

3. Research Model and Hypotheses

3.1. Research Model

The Technology Acceptance Model (TAM), introduced by Davis (1989) [23], is grounded in the Theory of Reasoned Action and incorporates elements from Expectancy Theory and Self-Efficacy Theory. This model identifies two key determinants of technology adoption: perceived usefulness and perceived ease of use [23]. According to TAM, these two factors jointly shape users’ attitudes toward a given technology, which in turn influence their behavioral intention to adopt it. Both perceived usefulness and perceived ease of use are further shaped by external variables. The Task–Technology Fit (TTF) model, proposed by Goodhue, posits that users are more likely to adopt an information technology when its functionalities align effectively with the tasks they need to perform [24]. Both TAM and TTF have been widely applied in studies examining user behavior toward GAI, including educators’ adoption intentions [25], university students’ willingness to use [26], and factors influencing sustained usage and the complexity of independent learning [27]. Willingness to adopt GAI for collaborative decision-making is essentially an extension of technology acceptance into the domain of intelligent decision-making. It reflects an individual’s behavioral inclination to engage with GAI as a collaborative partner in completing decision-making tasks. Therefore, building on the theoretical foundations of TAM and TTF, this study investigates the mechanisms underlying human–GAI collaborative decision-making willingness, aiming to elucidate the factors that shape such behavioral intentions.
Building on the characteristics of human–GAI collaborative decision-making within the GAI context, this study extends the traditional TAM in several theoretical directions. First, it integrates the TTF model and innovatively introduces two external variables—task–technology fit and collaborative efficacy—to account for the dynamics of human–GAI collaborative decision-making [28]. Second, recognizing the critical role of trust in human–GAI interactions [29], the model incorporates human–GAI trust as a core construct. Additionally, the study controls for demographic and usage-related variables, including gender, age, education level, and frequency of engaging in GAI-supported collaborative decision-making. In summary, this study proposes a theoretical model that delineates the factors influencing users’ willingness to adopt GAI for collaborative decision-making in the context of GAI, as illustrated in Figure 1.
Figure 1 illustrates the relationships among the key variables. The external factors of task–technology fit and collaborative efficacy influence perceived usefulness and perceived ease of use, which subsequently affect attitudes and the willingness to adopt GAI for collaborative decision-making. Additionally, human–GAI trust directly impacts the willingness to engage in such collaborative decision-making.

3.2. Research Hypotheses

3.2.1. Perceived Usefulness and Perceived Ease of Use

Perceived usefulness refers to users’ subjective evaluation of GAI’s capability to enhance decision-making efficiency and improve the quality of outcomes within collaborative decision-making processes. Specifically, when users believe that GAI can effectively process large volumes of complex data, generate diverse decision alternatives, or compensate for their own cognitive limitations, they are more likely to form favorable evaluations of the technology. This, in turn, reduces perceived uncertainty in collaborative decision-making and fosters positive attitudes toward using GAI, ultimately enhancing users’ behavioral intention to engage in human–GAI collaboration. Prior research has consistently supported the influence of perceived usefulness on users’ attitudes and behavioral intentions. For example, Belanche et al. [30] found that in the financial services domain, users’ perceptions of the usefulness and ease of use of robot advisors positively impacted their attitudes toward adoption. Similarly, Shata and Hartley [25] demonstrated that higher education faculty members’ perceptions of GAI’s usefulness and ease of use directly influenced their attitudes toward its adoption.
Perceived ease of use refers to users’ subjective evaluation of the interactive friendliness and operational simplicity of GAI-supported collaborative decision-making. When the GAI interface allows users to obtain decision recommendations without the need for complex commands, it lowers the technological barrier to entry and increases user acceptance. Moreover, if GAI demonstrates low learning costs and high response efficiency during human–GAI collaboration, users are more likely to form favorable evaluations of its decision-making capabilities [31], thereby strengthening their perception of its usefulness. Accordingly, the following hypotheses are proposed:
H1: 
Perceived usefulness positively affects attitude.
H2: 
Perceived ease of use positively influences perceived usefulness.
H3: 
Perceived ease of use positively influences attitude.

3.2.2. Attitude

Attitude refers to users’ overall evaluation and psychological disposition toward establishing a collaborative decision-making relationship with GAI systems [32]. Dietvorst et al. introduced the concept of “algorithm aversion,” describing the tendency of users to rely on their own judgment even when algorithms objectively outperform humans [33]. In practice, users often exhibit contradictory psychological responses to the same stimuli. On one hand, due to algorithm aversion, individuals reject algorithmic recommendations prompted by algorithmic errors more strongly than those arising from human errors, increasing their likelihood of dismissing such suggestions. On the other hand, algorithmic appreciation leads users to trust and adopt algorithmic recommendations over human ones [34]. These psychological biases typically arise from differences in users’ tolerance for algorithmic errors, perceptions of algorithmic transparency, and perceived control over decision-making. Such responses directly shape users’ attitudes toward generative AI. When users hold a positive attitude toward GAI-supported collaboration, their behavioral tendency to initiate or participate in such interactions is significantly strengthened [35]. This relationship reflects both cognitive recognition of the technology’s value and affective acceptance of the collaborative partnership, jointly contributing to a stronger willingness to adopt GAI for collaborative decision-making. Supporting this, Brüns et al. [36] found that while GAI adoption in firms initially triggered negative consumer reactions, framing GAI as a support tool rather than a replacement for human labor significantly alleviated these concerns. Therefore, the following hypothesis is proposed:
H4: 
Attitude positively influences willingness to adopt GAI for collaborative decision-making.

3.2.3. Human–GAI Trust

Human–GAI trust refers to the degree of user confidence in the reliability of GAI technology, the accuracy of its outputs, and the controllability of dynamic interactions within the human–GAI collaborative decision-making process [37]. Prior research has confirmed that the level of trust humans place in GAI significantly influences human–GAI interactions [38], directly impacting behavioral outcomes such as decision-making, judgment, and evaluation. In the context of dynamic human–GAI collaborative decision-making, trust functions as both a psychological prerequisite for accepting GAI collaboration and a critical factor in sustaining long-term human–GAI partnerships. When users exhibit a high level of trust in GAI systems, they are more likely to delegate part of their decision-making authority to the technology, viewing it as a collaborative partner rather than merely an information tool [39]. Keding et al. [40] similarly found that when enterprise managers have high trust in the recommendations of AI-based consulting systems, they are more inclined to adopt such systems in guiding strategic decisions, including R&D investments. Conversely, issues such as erroneous outputs or privacy breaches during the decision-making process can trigger a crisis of trust [41], subsequently diminishing users’ willingness to adopt GAI for collaborative decision-making. Thus, trust plays a pivotal role in either facilitating or hindering users’ willingness to collaborate with GAI in decision-making contexts. Therefore, the following hypothesis is proposed:
H5: 
Human–GAI trust positively influences willingness to adopt GAI for collaborative decision-making.

3.2.4. Task–Technology Fit

Task–Technology Fit (TTF) refers to the degree to which the technical capabilities of GAI align with the requirements of a given decision-making task within the human–GAI collaborative decision-making process. According to TTF theory, when the functional characteristics of a technology effectively meet the specific demands of task execution, users are more likely to experience performance improvements, thereby increasing their willingness to adopt the technology [42]. With its advanced capabilities in language comprehension, logical reasoning, and content generation, GAI has demonstrated substantial task–technology fit and decision support potential in professional domains such as legal document drafting and medical diagnosis [43]. In collaborative decision-making contexts, when GAI’s technical functions align closely with users’ task requirements—such as by providing information that directly supports or optimizes a particular decision—users are more likely to perceive GAI as enhancing decision-making efficiency and quality. This strengthens their recognition of GAI’s value in decision-making processes. Moreover, a high degree of task–technology fit facilitates users’ understanding of GAI’s decision logic and functional boundaries, thereby reducing cognitive load and lowering learning costs. For instance, Al-Emran et al. [26] found that both task characteristics and technological features significantly affect TTF, which in turn positively influences students’ usage behavior of GAI. Liu et al. [44] reported that among university students using YouTube for online learning, higher TTF was associated with greater perceived usefulness and ease of use. Based on this evidence, the following hypotheses are proposed:
H6: 
Task–technology fit positively influences perceived usefulness.
H7: 
Task–technology fit positively influences perceived ease of use.

3.2.5. Collaborative Efficacy

Collaborative efficacy refers to the strength of users’ belief in the enhanced decision-making performance that results from the complementarity between GAI’s technological capabilities and their own abilities within the human–GAI collaborative decision-making process. According to social cognitive theory, efficacy reflects an individual’s perception and affirmation of their own capabilities, which directly influence behavioral choices and performance outcomes [45]. In addition, studies have shown that in traditional interpersonal collaboration settings, support and feedback from others can significantly enhance individuals’ perceived efficacy. For example, in educational contexts, collaborative teaching has been found to strengthen teachers’ instructional efficacy [46], while in learning environments, collaborative learning improves both learning efficacy and performance [47]. Similarly, in human–GAI collaborative decision-making contexts, the Computers as Social Actors (CASA) paradigm suggests that the intelligent suggestions, responsive feedback, and knowledge generation provided by GAI function as “social-like” support mechanisms. These interactions function analogously to human collaboration, enhancing users’ perceptions of the effectiveness of the collaboration process. Consequently, they increase users’ awareness of GAI’s functional usefulness and decision support capabilities, ultimately strengthening their willingness to adopt GAI for collaborative decision-making.
Specifically, users with a high sense of collaborative efficacy are more inclined to invest time in in-depth interactions with GAI and exhibit greater adaptability and initiative when faced with technological limitations or disagreements during decision-making. Rather than abandoning the collaboration, they tend to revise or augment GAI outputs using their own expertise. This behavior stems from users’ recognition of the complementary strengths of human–GAI collaboration, particularly the belief that GAI can help compensate for their cognitive limitations [48], thereby reinforcing their perception of its usefulness in decision-making. Moreover, a high sense of collaborative efficacy implies a stronger sense of control over the collaboration with GAI, making users less likely to experience frustration due to system feedback biases. This helps reduce the psychological burden during usage and improves their overall perception of GAI’s ease of use. However, it is important to note that GAI still faces limitations in handling complex decision-making tasks, which may result in misunderstandings or cognitive conflict. In particular, when there is a discrepancy between users’ expectations and the actual outputs, their positive evaluation of GAI’s overall efficacy may be undermined, weakening the formation of perceived usefulness. Based on the above, the following hypotheses are proposed:
H8: 
Collaborative efficacy positively influences perceived usefulness.
H9: 
Collaborative efficacy positively influences perceived ease of use.

4. Methodology

4.1. Sampling and Data Collection

In this study, data were collected through a questionnaire survey. Prior to the formal survey, a pilot survey was conducted to assess the reliability and validity of the instrument. The pilot survey collected 120 questionnaires from respondents with human–GAI collaborative decision-making experience across diverse industry backgrounds. The questionnaire was revised and refined based on statistical analysis of the pilot survey data. First, principal component analysis was used for factor extraction. Items with factor loadings exceeding 0.4 were retained, while those exhibiting high cross-factor loadings were excluded to ensure that each item adequately reflected its corresponding latent variable [49]. Subsequently, an inter-item correlation analysis was conducted to eliminate items with excessively high or low correlations. This reduced interference from multicollinearity and ensured independence between the questionnaire items. Finally, reliability and validity tests were conducted on the processed pilot survey data. The results showed that all measurement items had factor loadings exceeding 0.7, Cronbach’s α and CR values both surpassed 0.8, and AVE values exceeded 0.6. The pilot survey thus yielded satisfactory results, indicating that the instrument was suitable for the formal survey.
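To make the pilot-stage item screening concrete, the sketch below shows how the two reported checks could be reproduced in Python: unrotated principal component loadings for one construct’s items (retain loadings above 0.4) and Cronbach’s α. The analyses in the paper were run in SPSS; the file and column names here are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

pilot = pd.read_csv("pilot_survey.csv")        # hypothetical file of 120 pilot responses
pu_items = pilot[["PU1", "PU2", "PU3"]]        # items of one latent construct

# Unrotated principal component loadings: eigenvector scaled by sqrt(eigenvalue)
z = (pu_items - pu_items.mean()) / pu_items.std(ddof=1)
pca = PCA(n_components=1).fit(z)
loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])

print("loadings:", np.round(loadings, 3))      # screening rule: retain items > 0.4
print("alpha:", round(cronbach_alpha(pu_items), 3))
```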
To ensure alignment between the study topic and the target population, a screening question—“Have you ever used GAI to solve a decision-making problem?”—was included. Only respondents with prior experience in human–GAI collaborative decision-making were eligible to participate in the full survey. To minimize social desirability and common method biases, all questionnaires were administered anonymously. Respondents were explicitly informed that there were no right or wrong answers, enhancing the authenticity of their responses. Furthermore, to reduce response sets and potential suggestive associations between items, the order of the measurement items for different variables within the questionnaire was randomized.
The formal data collection was conducted between 10 August and 31 October 2024, using the Questionnaire Star platform. A total of 545 questionnaires were collected. Based on insights from the pilot test, the questionnaire typically required 2–5 min to complete. Consequently, participants who completed the survey in an unusually short time (less than 100 s) were considered to have responded inattentively, and their data were classified as invalid. After rigorous screening, 73 invalid questionnaires were excluded, resulting in a total of 472 valid responses. The effective response rate was 86.61%. Among the valid respondents, 53.39% were male and 46.61% were female, indicating a relatively balanced gender distribution. Additionally, 85.59% of respondents were under the age of 40, and 88.56% held at least an undergraduate degree. In terms of GAI collaborative decision-making usage frequency, 31.14% reported using GAI 1–5 times per month, while 47.67% reported using it 6–10 times per month.
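The time-based validity screening described above is simple to reproduce; a minimal pandas sketch, with hypothetical column names `used_gai` (the screening question) and `duration_s` (completion time), might look like this:

```python
import pandas as pd

raw = pd.read_csv("survey_export.csv")          # hypothetical export of 545 responses
# Keep respondents who passed the screening question and took at least 100 s
valid = raw[(raw["used_gai"] == "yes") & (raw["duration_s"] >= 100)]
print(f"{len(valid)} valid of {len(raw)} collected "
      f"({len(valid) / len(raw):.2%} effective response rate)")
```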

4.2. Measures

The questionnaire was developed based on the context of human–GAI collaborative decision-making behavior. Measurement scales were drawn from both domestic and international academic literature to ensure scientific rigor and conceptual validity. It comprised seven latent variables and 24 items, all measured using a five-point Likert scale ranging from “Strongly Disagree” (1) to “Strongly Agree” (5). The specific sources of each variable and its measurement indicators are detailed in Appendix A. The scales for perceived usefulness (PU) and perceived ease of use (PEU) were adapted from Davis [23] and Belanche et al. [30]. The attitude (ATT) measure was based on Lee et al. [32]. The human–GAI trust (HAT) indicator drew upon Shata and Hartley [25] and Keding et al. [40]. Task–technology fit (TTF) employed the scale items proposed by Al-Emran et al. [26]. Collaborative efficacy (CE) was measured using reference scales from Shaw [50] and Shahzad et al. [51]. Finally, the indicator for willingness to adopt GAI for collaborative decision-making (HADI) was adapted from Shata and Hartley [25] and Lee et al. [32].

4.3. Analysis Methods

This study employs a mixed-methods approach that combines structural equation modeling (SEM) with fuzzy-set qualitative comparative analysis (fsQCA) to investigate the factors influencing willingness to adopt GAI for collaborative decision-making. These two methods differ in focus and are grounded in distinct research principles. SEM primarily examines linear relationships between variables, emphasizing the net effect of independent variables on dependent variables. Accordingly, SEM was employed in this study to validate the theoretical model and research hypotheses, specifically assessing how variables such as perceived usefulness and perceived ease of use relate to the willingness to adopt GAI for collaborative decision-making. SPSS 26.0 and Amos 28.0 were used to conduct the reliability and validity tests and the SEM analysis.
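The SEM itself was estimated in Amos; for readers who prefer an open-source workflow, a rough equivalent of the hypothesized model (H1–H9) using Python’s semopy package might look like the sketch below. Construct and item names follow Appendix A; the CE and HADI item labels and the data file are assumptions, since the appendix excerpt is truncated.

```python
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model (latent =~ indicators)
PU  =~ PU1 + PU2 + PU3
PEU =~ PEU1 + PEU2 + PEU3
ATT =~ ATT1 + ATT2 + ATT3 + ATT4
HAT =~ HAT1 + HAT2 + HAT3 + HAT4
TTF =~ TTF1 + TTF2 + TTF3
CE  =~ CE1 + CE2 + CE3          # remaining item labels assumed
HADI =~ HADI1 + HADI2 + HADI3   # item labels assumed

# structural model (H1-H9)
PU   ~ PEU + TTF + CE           # H2, H6, H8
PEU  ~ TTF + CE                 # H7, H9
ATT  ~ PU + PEU                 # H1, H3
HADI ~ ATT + HAT                # H4, H5
"""

data = pd.read_csv("valid_responses.csv")   # hypothetical file of 472 valid questionnaires
model = semopy.Model(MODEL_DESC)
model.fit(data)

print(model.inspect())                      # path coefficients and p-values
print(semopy.calc_stats(model))             # fit indices: chi2/df, GFI, CFI, RMSEA, ...
```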
Although SEM is effective for analyzing the marginal “net effect” of independent variables, it is limited in addressing interdependencies among latent antecedents and asymmetric causal relationships. To overcome these limitations, this study incorporates fsQCA, a method grounded in holistic and set-theoretic principles that effectively handles complex causal structures [52]. As a case-driven methodology, fsQCA integrates qualitative and quantitative analyses, exploring the relationships between combinations of variables (configurations) and outcomes from a set-theoretic perspective [53]. It is particularly effective at capturing intricate complementarities among factors and provides a richer analytical perspective. Given that human–GAI collaborative decision-making may result from multiple intertwined factors, relying solely on SEM may not fully reveal these complex mechanisms. Therefore, fsQCA was employed to identify the configurational pathways that enhance willingness to adopt GAI for collaborative decision-making within GAI environments. The fsQCA analysis was conducted using fsQCA 4.1 software.

5. Results

5.1. Reliability and Validity Analyses

Reliability and validity analyses are essential to ensure the robustness and credibility of research findings. The key indicators used for assessing reliability and validity included Cronbach’s α, composite reliability (CR), and average variance extracted (AVE). Cronbach’s α is a widely used measure of internal consistency, reflecting the degree of agreement among items intended to measure the same construct. CR evaluates the overall reliability of a latent variable with respect to its measurement items, indicating the consistency of the items in capturing the underlying construct. AVE measures the proportion of variance that a latent variable explains in its associated items, providing an assessment of convergent validity. The results are shown in Table 1. The overall Cronbach’s α coefficient of the questionnaire was 0.92, and the values for each dimension ranged from 0.82 to 0.893, all surpassing the commonly accepted threshold of 0.80 [54], confirming the high reliability of the measurement scales. The standardized factor loadings for all variables ranged from 0.777 to 0.834, satisfying the recommended minimum threshold of 0.70 [53]. Composite reliability (CR) values were all above 0.80, and average variance extracted (AVE) values exceeded 0.60, demonstrating good convergent validity of the measurement model. Furthermore, discriminant validity was assessed by comparing the square root of the AVE for each construct with its correlations with the other constructs. As shown in Table 2, the square root of the AVE for each variable was greater than its corresponding inter-construct correlation coefficients, indicating that the questionnaire also possessed good discriminant validity.
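For reference, the CR and AVE indices reported above follow the standard Fornell–Larcker formulas, computed from the standardized loadings of a construct’s items:

```latex
% CR and AVE for a construct measured by n items with standardized
% loadings \lambda_i (item error variance 1 - \lambda_i^2):
\mathrm{CR}  = \frac{\left(\sum_{i=1}^{n} \lambda_i\right)^{2}}
                    {\left(\sum_{i=1}^{n} \lambda_i\right)^{2}
                     + \sum_{i=1}^{n}\left(1 - \lambda_i^{2}\right)},
\qquad
\mathrm{AVE} = \frac{\sum_{i=1}^{n} \lambda_i^{2}}{n}
```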
To mitigate the potential risk of common method bias arising from the consistency of data sources, questionnaire context, and item characteristics, this study employed Harman’s single-factor test. Harman’s single-factor test applies an unrotated exploratory factor analysis to the measurement scales of the questionnaire. It evaluates the extent of common method bias by examining the number of extracted factors, the variance explained by each factor, and the proportion of total variance accounted for by the first factor. A higher proportion of variance explained by the first factor indicates a greater risk of common method bias [55]. An unrotated exploratory factor analysis was conducted in SPSS using all measured items. The analysis extracted six factors with eigenvalues greater than 1, with the first factor accounting for 37.73% of the total variance, which is below the commonly accepted threshold of 40%. This indicates that no single factor overwhelmingly explains the variance in the data. Therefore, the results suggest that common method bias is not a significant concern in this study.
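A sketch of Harman’s single-factor test in Python, assuming the 472 valid responses are loaded with the Appendix A item names, could use the factor_analyzer package to obtain the unrotated eigenvalue spectrum:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Load the 24 measurement items (column names follow Appendix A; file hypothetical)
items = pd.read_csv("valid_responses.csv").filter(regex=r"^(PU|PEU|ATT|HAT|TTF|CE|HADI)")

# Unrotated exploratory factor analysis; eigenvalues come from the correlation matrix
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()

n_retained = int((eigenvalues > 1).sum())        # factors with eigenvalue > 1
first_share = eigenvalues[0] / items.shape[1]    # variance share of the first factor

print(f"factors with eigenvalue > 1: {n_retained}")
print(f"first factor explains {first_share:.2%} of total variance (40% threshold)")
```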

5.2. Assessment of the Structural Model

In this study, the research model and hypotheses proposed in the previous section were empirically tested, with the results presented in Table 3 and Figure 2. The fit indices presented in Table 3 were used to assess the model’s goodness of fit with the observed data, thereby supporting its validity and reliability. The values of key indices, including χ2/df, GFI, and CFI, all fall within the recommended thresholds, indicating that the research model demonstrates a satisfactory level of fit. Moreover, the path analysis results in Figure 2 reveal that the standardized path coefficients for all nine hypotheses are positive and statistically significant (p < 0.05), thereby providing empirical support for hypotheses H1 through H9.
Specifically, task–technology fit (β = 0.185, p < 0.05) and collaboration efficacy (β = 0.24, p < 0.01) exert significant positive effects on perceived usefulness. Similarly, task–technology fit (β = 0.27, p < 0.01) and collaboration efficacy (β = 0.376, p < 0.001) significantly and positively influence perceived ease of use. These findings indicate that when GAI aligns more closely with users’ decision-making needs, they develop a stronger perception of its value and experience a lower psychological sense of operational complexity. At the same time, GAI’s advanced information processing capabilities and multifunctionality enhance users’ decision-making efficiency in collaborative contexts, which strengthens their perceptions of the technology’s usefulness and ease of use.
Perceived ease of use (β = 0.479, p < 0.001) also has a significant positive impact on perceived usefulness. In turn, both perceived usefulness (β = 0.375, p < 0.001) and perceived ease of use (β = 0.272, p < 0.001) significantly influence users’ attitudes toward using GAI. Attitude toward use (β = 0.376, p < 0.001) significantly predicts users’ willingness to adopt GAI for collaborative decision-making. These findings are consistent with the core constructs of the TAM. When users perceive a technology as easy to use, they are more likely to believe it can provide tangible benefits, which reinforces their positive evaluation of the technology. Positive perceptions of value and ease of use operate as complementary mechanisms that strengthen favorable attitudes toward GAI. In essence, perceived ease of use and perceived usefulness function as mutually reinforcing factors that shape users’ willingness to adopt GAI for collaborative decision-making by influencing their attitudes toward adopting GAI. These attitudes, in turn, determine whether users are willing to collaborate with GAI in decision-making.
Human–GAI trust (β = 0.388, p < 0.001) also exhibits a significant positive effect on collaborative decision-making intention. This suggests that users’ trust in GAI substantially increases their willingness to collaborate with it in the decision-making process. Trust functions not only as the basis for accepting GAI-generated recommendations but also as a critical driver of sustained collaboration. The greater the level of trust users place in GAI, the stronger their willingness to interact with it during decision-making, which enhances their overall willingness to adopt GAI for collaborative decision-making.
Finally, the control variables—gender, age, education level, and frequency of using GAI for collaborative decision-making—were found to have no significant effect on users’ willingness to adopt GAI for collaborative decision-making.

5.3. Fuzzy-Set Qualitative Comparative Analysis

5.3.1. Calibration

Data calibration is a critical step in fsQCA, as it transforms variable values into fuzzy sets with membership scores ranging from 0 to 1. Calibration can be conducted using either direct or indirect methods. The direct method requires researchers to specify three qualitative thresholds as anchors for calibration, whereas the indirect method relies on qualitative assessments of the factors [52]. The choice of method depends on both the characteristics of the data and the underlying theoretical framework. In the absence of standardized reference criteria in the current research context, this study employed the direct calibration method, drawing on prior studies [53], and set the full-membership, crossover, and full non-membership thresholds for each variable at the 95th, 50th, and 5th percentiles of the sample, respectively. The calibration points for each variable are presented in Table 4. Additionally, because cases with a membership score of exactly 0.5 are ambiguous in their set membership, following prior practice [56], 0.001 was added to membership scores of exactly 0.5 to avoid this ambiguity.
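A minimal sketch of this direct calibration is given below, assuming `scores` holds a variable’s aggregated survey scores (placeholder data). The three percentile anchors are mapped onto log-odds of roughly +3, 0, and −3, which is the transformation commonly described for the direct method.

```python
import numpy as np

def direct_calibration(x, non_member, crossover, full_member):
    """Map raw scores to [0, 1] fuzzy memberships; the three anchors
    correspond to log-odds of about -3, 0, and +3 respectively."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_member - crossover),
        3.0 * (x - crossover) / (crossover - non_member),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

scores = np.random.default_rng(0).normal(3.5, 0.7, size=472)   # placeholder scores
p5, p50, p95 = np.percentile(scores, [5, 50, 95])              # the three anchors
membership = direct_calibration(scores, p5, p50, p95)

# Avoid ambiguous set membership at exactly 0.5 (see above)
membership[membership == 0.5] += 0.001
```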

5.3.2. Analysis of Necessary Conditions

Although the analysis of sufficient conditions is central to fsQCA, a necessary condition analysis must be conducted first. This analysis assesses whether a specific antecedent condition is indispensable for the occurrence of the outcome variable. In general, a condition is considered necessary if its consistency score exceeds the threshold of 0.90 [52]. As shown in Table 5, the consistency scores for all antecedent variables fall below this threshold, indicating that none of the six influencing factors can independently explain the outcome variable or be considered a necessary condition for willingness to adopt GAI for collaborative decision-making. Therefore, a further analysis of sufficient conditions is required to examine how combinations of factors jointly influence users’ willingness to adopt GAI for collaborative decision-making.
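The necessity test itself reduces to a simple set-theoretic computation over the calibrated memberships; a sketch, with `x` as a condition’s membership array and `y` as the outcome’s (hypothetical arrays from the calibration step), is shown below.

```python
import numpy as np

def necessity_consistency(x: np.ndarray, y: np.ndarray) -> float:
    # Consistency of necessity: sum(min(x_i, y_i)) / sum(y_i); necessary if > 0.90
    return np.minimum(x, y).sum() / y.sum()

# Example with hypothetical membership arrays:
# necessity_consistency(pu_membership, hadi_membership)
```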

5.3.3. Analysis of Sufficient Conditions

First, an initial truth table with 2⁶ = 64 possible combinations of the six antecedent conditions was constructed to examine how different configurations influence the willingness to adopt GAI for collaborative decision-making. Second, in terms of parameter settings, given that the sample size exceeds 150, which qualifies as a large sample, the analysis followed existing research guidelines [57]. Accordingly, the consistency threshold was set to 0.80, the PRI (Proportional Reduction in Inconsistency) consistency threshold to 0.75, and the case frequency threshold to 2. Following these settings, fsQCA generated complex, intermediate, and parsimonious solutions. Due to the excessive number of configurations in the complex solution, this study adopted the nested relationship comparison method between the intermediate and parsimonious solutions to identify core and peripheral (edge) conditions. If a condition appears in both the intermediate and parsimonious solutions, it is identified as a core condition; if it appears only in the intermediate solution, it is classified as an edge condition.
From Table 6, four configurational pathways were identified as contributing to high willingness to adopt GAI for collaborative decision-making. Notably, in all four pathways, variables appear either as core conditions or as the absence of edge conditions, and there are no configurations where edge conditions appear in the absence of core conditions. The overall solution consistency is 0.918, and the consistency of each individual pathway exceeds the recommended threshold of 0.85. Therefore, the four configurations can reliably explain high levels of collaborative decision-making willingness. The overall coverage is 0.65, suggesting that the four pathways collectively account for 65% of the observed cases, reflecting a strong explanatory capacity.
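For clarity, the consistency and coverage figures reported in Table 6 are set-theoretic measures over the calibrated memberships; the sketch below shows how a single path would be scored, using Path 1 (PU * ATT * CE) as an illustration with hypothetical membership arrays.

```python
import numpy as np

def sufficiency_consistency(config: np.ndarray, y: np.ndarray) -> float:
    # Consistency(config -> outcome) = sum(min(c_i, y_i)) / sum(c_i)
    return np.minimum(config, y).sum() / config.sum()

def raw_coverage(config: np.ndarray, y: np.ndarray) -> float:
    # Share of the outcome's membership accounted for by the configuration
    return np.minimum(config, y).sum() / y.sum()

# Configuration membership is the minimum across its conditions; a negated
# condition such as ~HAT in Path 2 would enter as (1 - hat).
# path1 = np.minimum.reduce([pu, att, ce])
# print(sufficiency_consistency(path1, hadi), raw_coverage(path1, hadi))
```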
Based on the results in Table 6, this study categorizes the four configurational paths into three main types: perceived value-driven, functional compensation-driven, and trust in technology-driven.
(1)
Perceived value-driven. Path 1 (PU * ATT * CE) includes perceived usefulness, attitude toward use, and collaborative efficacy as core conditions. This configuration aligns with the theoretical pathway of “Collaborative Efficacy → Perceived Usefulness → Attitude toward Use” and has an original coverage of 55.7%, the highest among the four identified pathways. Path 1 indicates that when users perceive high usefulness and collaborative efficacy in GAI-supported decision-making, and simultaneously hold a positive attitude toward its use, their willingness to adopt GAI for collaborative decision-making is significantly enhanced. These findings suggest that a favorable perception of GAI’s utility and collaborative capacity, coupled with a positive behavioral attitude, directly drives users’ willingness to adopt GAI for collaborative decision-making processes.
(2)
Functional compensation-driven. Path 2 (PU * ~HAT * TTF * CE) includes perceived usefulness, task–technology fit, and collaborative efficacy as core conditions, while human–GAI trust is absent as a peripheral condition. This configuration suggests that even when users exhibit low levels of trust in GAI-supported decision-making, they may still develop a willingness to adopt GAI for collaboration. This willingness is driven by the combined benefits of decision-making effectiveness and task fit. In such cases, technological strengths may outweigh the lack of trust. This reflects a dynamic compensation mechanism, where the functional value of GAI helps to offset deficits in trust during collaborative decision-making.
When GAI demonstrates strong decision support capabilities and enables synergistic improvements in task performance, users may ground their reliance in GAI’s demonstrated performance rather than in psychological trust, thus increasing their willingness to collaborate. This pathway also underscores the configurational nature of user behavior, emphasizing that willingness to engage in human–GAI collaborative decision-making is shaped by the interplay of multiple factors. Although SEM results indicate that human–GAI trust has a direct and positive influence on collaborative willingness, the fsQCA findings reveal a more nuanced perspective. In certain contexts, the combination of perceived usefulness, task–technology fit, and collaborative efficacy can effectively compensate for low trust, promoting willingness to engage in human–GAI collaborative decision-making.
(3)
Trust in technology-driven. Path 3 (PU * PEU * HAT * TTF) and Path 4 (PU * PEU * HAT * CE) both feature perceived usefulness, perceived ease of use, and human–GAI trust as core conditions. These are supplemented by task–technology fit in Path 3 and collaborative efficacy in Path 4, which also serve as core conditions. The two paths exhibit similar raw coverage values—0.517 for Path 3 and 0.514 for Path 4—indicating comparable explanatory power. This configuration suggests that when users perceive high levels of usefulness, ease of use, and trust in GAI-supported decision-making, and when either task–technology fit or collaborative efficacy is also high, users are highly likely to exhibit a strong willingness to adopt GAI for collaboration. Additionally, both pathways highlight the synergistic effect between trust and technological factors in shaping collaborative decision-making behavior. However, the two paths emphasize different aspects of the decision-making context. Path 3 focuses on whether GAI can deliver personalized and efficient solutions that align with specific task requirements, emphasizing the role of task adaptation. In contrast, Path 4 centers on the benefit orientation of the collaboration process, indicating that users are more inclined to participate in GAI-assisted decision-making when they are convinced that it can lead to tangible performance gains.

5.3.4. Robustness Analysis

To validate the robustness of the research findings, this study conducts robustness analysis on the key parameters of the fsQCA analysis. Common approaches to robustness testing in QCA include adjusting the consistency threshold, case frequency threshold, and PRI consistency threshold [58]. In this study, adjustments were made to both the consistency threshold and the case frequency threshold. First, the consistency threshold was increased from 0.8 to 0.9 while keeping all other conditions constant. The results show that, for configurations leading to high willingness to adopt GAI for collaborative decision-making, the configuration paths remain unchanged, still yielding four distinct solution paths. Second, the case frequency threshold was raised from 2 to 3, again holding other conditions constant. The results similarly indicate that the resulting configurations remained largely consistent. Therefore, the analytical results can be considered relatively robust.

6. Conclusions, Discussion, and Implications

6.1. Conclusions

This study builds on the TAM and the TTF framework, while integrating human–GAI trust and collaborative efficacy to construct a theoretical model of the factors influencing willingness to adopt GAI for collaborative decision-making in GAI contexts. SEM was employed to examine the effects of these factors on collaborative decision-making willingness. Subsequently, fsQCA was applied to further explore the configurational pathways leading to collaborative decision-making willingness. Based on the empirical analyses, several key conclusions were drawn:
(1)
The SEM results show that task–technology fit, collaborative efficacy, perceived usefulness, perceived ease of use, attitude, and human–GAI trust all have significant positive effects on willingness to adopt GAI for collaborative decision-making. Complementing this, the fsQCA results identify perceived usefulness as a core condition across four high-willingness pathways, while collaborative efficacy appears as a core condition in three pathways. These findings underscore the central role of perceived usefulness and collaborative efficacy in shaping collaborative decision-making willingness. In particular, users’ positive perceptions of GAI in terms of decision support, decision quality, and decision efficiency emerge as core conditions for strengthening willingness to collaborate in decision-making.
(2)
The fsQCA results identified four high-willingness configuration pathways for adopting GAI in collaborative decision-making, which can be grouped into three categories. The perceived value-driven pathway aligns with the theoretical model, emphasizing the role of collaborative efficacy in shaping users’ assessments of GAI’s usefulness in collaborative decision-making [48]. Users who developed positive cognitions regarding GAI’s practical value, collaborative efficacy, and attitudes toward its use exhibited significantly greater willingness to collaborate. The functional compensation-driven pathway demonstrates that the combination of external variables from the theoretical model—task–technology fit and collaborative efficacy—with perceived usefulness can compensate for users’ lack of human–GAI trust, leading decision-makers to accept collaboration primarily on functional grounds. This finding further confirms the central role of collaborative efficacy and perceived usefulness in shaping collaborative decision-making willingness. Finally, the trust in technology-driven pathway establishes a strongly driven configuration for collaborative decision-making willingness, grounded in perceived usefulness, perceived ease of use, and human–GAI trust, in combination with either task–technology fit or collaborative efficacy. Together, these pathways reveal that users may adopt preferences that are either task-oriented or outcome-oriented, depending on the decision-making context.

6.2. Discussion

By integrating SEM and fsQCA, this study identifies the factors influencing the willingness to adopt GAI for collaborative decision-making, thereby providing a robust theoretical foundation for future research in this field.
First, while prior research has predominantly emphasized users’ cognitive evaluations of GAI in decision-making settings, comparatively less attention has been given to their behavioral intentions concerning collaborative decision-making with GAI. This study helps fill that gap by enriching the literature on human–GAI collaborative decision-making behavior. Second, building on the TAM and TTF models, this study innovatively incorporated two additional influencing factors: human–GAI trust and collaborative efficacy. This expands the existing theoretical framework and offers a more accurate reflection of real-world scenarios involving human–GAI collaboration in the context of GAI. Methodologically, this research employs a mixed-methods approach to uncover the underlying mechanisms influencing the willingness to adopt GAI for collaborative decision-making. This hybrid analytic strategy offers a more holistic theoretical perspective on human–GAI collaboration. The findings reveal that perceived usefulness and collaborative efficacy serve as core drivers of willingness to participate in GAI-assisted decision-making. Furthermore, three distinct configurational types leading to high collaboration willingness were identified, providing novel insights into how different combinations of factors shape user intentions.
Nonetheless, this study has several limitations that we note as opportunities for future exploration. First, the sample is drawn primarily from China and is composed largely of younger, more highly educated individuals. Although this demographic represents early adopters of GAI technology, the sample characteristics still constrain the generalizability of the findings. Trust in and adoption of GAI may differ substantially across regions, cultures, and age groups, making it essential to validate these conclusions in more diverse populations. Future studies should therefore draw on broader and more heterogeneous samples to provide a more comprehensive understanding of willingness to adopt GAI for collaborative decision-making. Second, this study focused mainly on the effects of technological and individual-level factors, paying less attention to social influences. According to Sociotechnical Systems Theory (STS), the introduction and application of technology are not isolated processes but are embedded within sociocultural contexts, organizational structures, and value systems, which in turn exert reciprocal effects on social systems [59]. STS highlights four interrelated dimensions: technology, tasks, structure, and personnel. Building on this perspective, future research should incorporate social-level variables, such as policy environments and cultural factors, into analytical models to achieve a more holistic understanding of the mechanisms shaping willingness to adopt GAI for collaborative decision-making. Finally, the factors prioritized by decision-makers often diverge owing to domain-specific demands. A fruitful direction for future work would be to concentrate on empirical investigations in a particular sector, enabling a more detailed examination of context-dependent decision-making dynamics.

6.3. Implications

Based on the findings of this study, the following strategies and recommendations are proposed: (1) Enhance GAI algorithm models to improve decision-making collaboration efficiency. The results indicate that willingness to adopt GAI for collaborative decision-making is strongly influenced by users’ explicit perception of improvements in decision-making efficiency and quality. In this regard, intelligent organizations should enhance GAI’s decision support capabilities through innovations in algorithmic architecture and more efficient use of computational resources. Such enhancements will enable users to experience the tangible benefits of human–GAI collaboration during the decision-making process, thereby increasing organizational willingness to adopt collaborative decision-making systems. (2) Improve GAI interaction experiences to strengthen human–GAI trust. To foster trust, intelligent organizations should establish transparent communication mechanisms between users and GAI systems. These mechanisms can help users better understand the behavior and reasoning of GAI in decision-making contexts. Additionally, improving the explainability of GAI can further enhance user trust and acceptance by making its decision-making logic and underlying rationale more accessible. (3) Deepen scenario-specific adaptation of GAI to ensure technical task alignment. Intelligent organizations should focus on developing specialized GAI systems tailored to specific decision-making scenarios. By integrating domain-specific knowledge, such systems can significantly improve decision-making quality and address the common compatibility issues that arise when applying general-purpose models to specialized contexts.

Author Contributions

J.D.: conceptualization, methodology, supervision, and writing—review and editing; F.W.: data curation, formal analysis, investigation, methodology, and writing—original draft preparation; J.Q.: supervision and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China’s major project ‘Privacy Data Protection and Medical Risk Decision Making in the Environment of Medical Network’, grant number 72293583, and the Ministry of Education of China’s Philosophy and Social Science Research Major Project ‘Risk Governance of Generative Artificial Intelligence Systems and Capacity Building’, grant number 24JZD040.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the corresponding author upon reasonable request.

Acknowledgments

We are particularly grateful to our supervisors for their invaluable guidance and advice during the course of the study, whose expertise and experience provided a clear guide for the direction of our research. We sincerely thank the reviewers for their constructive comments, which helped us to further improve our research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Part of the questionnaire design.

| Variables | Indicators | Measurement Items | Source |
| --- | --- | --- | --- |
| Perceived Usefulness | PU1 | GAI effectively integrates the strengths of humans and GAI, providing real-time feedback that enhances the quality of collaborative decision-making. | [23,30] |
| | PU2 | GAI significantly improves the speed and efficiency of decision-making. | |
| | PU3 | The intelligent analysis and multimodal support offered by GAI enable me to make optimal decisions. | |
| Perceived Ease of Use | PEU1 | When learning to use GAI, I am able to quickly adapt to its interaction methods and personalized features. | [23,30] |
| | PEU2 | Using GAI for collaborative decision-making is straightforward for me. | |
| | PEU3 | When using GAI for decision support, I can readily find solutions or obtain assistance if issues arise. | |
| Attitude | ATT1 | I consider the use of GAI in collaborative decision-making to be necessary. | [32] |
| | ATT2 | I consider the use of GAI in collaborative decision-making to be wise. | |
| | ATT3 | I consider the use of GAI in collaborative decision-making to be worthwhile. | |
| | ATT4 | I believe that GAI performs exceptionally well in collaborative decision-making. | |
| Human–GAI Trust | HAT1 | I trust GAI to safeguard my personal information (e.g., personal data, preferences, decision history). | [25,40] |
| | HAT2 | I believe that the information resources provided by GAI are accurate, reliable, and thoroughly verified, thereby supporting better decision-making. | |
| | HAT3 | I believe that GAI's decisions comply with ethical standards, laws, and regulations, and that it can explain the rationale for its decisions. | |
| | HAT4 | I believe that GAI's decision-making process is transparent and that its outputs are explainable. | |
| Task–Technology Fit | TTF1 | I believe that GAI can adjust its outputs in real time based on my feedback to meet my decision-making needs. | [26] |
| | TTF2 | I believe that the information provided by GAI effectively supports the completion of my decision-making tasks. | |
| | TTF3 | I believe that the information and recommendations from GAI are highly aligned with, and practical for, my decision-making objectives. | |
| Collaborative Efficacy | CE1 | I believe that collaborating with GAI facilitates efficient completion of decision-making tasks, and that GAI can continuously optimize its recommendations in real time based on my feedback. | [50,51] |
| | CE2 | I believe that GAI can enhance decision-making quality through active learning, thereby significantly shortening the decision-making cycle. | |
| | CE3 | I believe that GAI reduces my information-processing burden in complex decision-making tasks. | |
| | CE4 | I believe that collaborating with GAI strengthens my judgment and decision-making abilities, particularly in dynamic and complex decision-making environments. | |
| Willingness to Adopt GAI for Collaborative Decision-Making | WACD1 | I am willing to collaborate with GAI in the decision-making process. | [25,32] |
| | WACD2 | I intend to adopt GAI for collaborative decision-making in future complex decision-making tasks. | |
| | WACD3 | I look forward to jointly optimizing decision-making outcomes through the use of GAI. | |

References

  1. Wu, J.; Gan, W.; Chen, Z.; Wan, S.; Lin, H. AI-Generated Content (AIGC): A Survey. arXiv 2023, arXiv:2304.06632. [Google Scholar] [CrossRef]
  2. Aickelin, U.; Maadi, M.; Khorshidi, H.A. Expert–Machine Collaborative Decision Making: We Need Healthy Competition. IEEE Intell. Syst. 2022, 37, 28–31. [Google Scholar] [CrossRef]
  3. Hao, X.; Demir, E.; Eyers, D. Exploring Collaborative Decision-Making: A Quasi-Experimental Study of Human and Generative AI Interaction. Technol. Soc. 2024, 78, 102662. [Google Scholar]
  4. Rahwan, I.; Cebrian, M.; Obradovich, N.; Bongard, J.; Bonnefon, J.-F.; Breazeal, C.; Crandall, J.W.; Christakis, N.A.; Couzin, I.D.; Jackson, M.O.; et al. Machine Behaviour. Nature 2019, 568, 477–486. [Google Scholar] [CrossRef]
  5. Wang, F.-Y.; Yang, J.; Wang, X.; Li, J.; Han, Q.-L. Chat with ChatGPT on Industry 5.0: Learning and Decision-Making for Intelligent Industries. IEEE/CAA J. Autom. Sin. 2023, 10, 831–834. [Google Scholar] [CrossRef]
  6. Haesevoets, T. Human-Machine Collaboration in Managerial Decision Making. Comput. Hum. Behav. 2021, 119, 106730. [Google Scholar] [CrossRef]
  7. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So What If ChatGPT Wrote It?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  8. Yu, T.; Tian, Y.; Chen, Y.; Huang, Y.; Pan, Y.; Jang, W. How Do Ethical Factors Affect User Trust and Adoption Intentions of AI-Generated Content Tools? Evidence from a Risk-Trust Perspective. Systems 2025, 13, 461. [Google Scholar] [CrossRef]
  9. Coeckelbergh, M. Narrative Responsibility and Artificial Intelligence. AI Soc. 2023, 38, 2437–2450. [Google Scholar] [CrossRef]
  10. Puerta-Beldarrain, M.; Gómez-Carmona, O.; Sánchez-Corcuera, R.; Casado-Mansilla, D.; López-de-Ipiña, D.; Chen, L. A Multifaceted Vision of the Human-AI Collaboration: A Comprehensive Review. IEEE Access 2025, 13, 29375–29405. [Google Scholar] [CrossRef]
  11. Wamba, S.F.; Queiroz, M.M.; Trinchera, L. The Role of Artificial Intelligence-Enabled Dynamic Capability on Environmental Performance: The Mediation Effect of a Data-Driven Culture in France and the USA. Int. J. Prod. Econ. 2024, 268, 109131. [Google Scholar] [CrossRef]
  12. Banh, L.; Strobel, G. Generative Artificial Intelligence. Electron. Mark. 2023, 33, 63. [Google Scholar] [CrossRef]
  13. Silva, M.; Santos, E.; Alves, K.; Silva, H.; Pedrosa, F.; Valença, G.; Brito, K. Using generative AI for simplifying official documents in the public accounts domain. In Proceedings of the Workshop de Computação Aplicada em Governo Eletrônico (WCGE), Brasilia, Brazil, 21–25 July 2024; SBC: São Paulo, Brazil, 2024; pp. 246–253. [Google Scholar]
  14. Rao, V.M.; Hla, M.; Moor, M.; Adithan, S.; Kwak, S.; Topol, E.J.; Rajpurkar, P. Multimodal Generative AI for Medical Image Interpretation. Nature 2025, 639, 888–896. [Google Scholar] [CrossRef]
  15. Li, L.; Liu, Y.; Jin, Y.; Cheng, T.C.E.; Zhang, Q. Generative AI-Enabled Supply Chain Management: The Critical Role of Coordination and Dynamism. Int. J. Prod. Econ. 2024, 277, 109388. [Google Scholar] [CrossRef]
  16. Osborne, M.R.; Bailey, E.R. Me vs. the Machine? Subjective Evaluations of Human- and AI-Generated Advice. Sci. Rep. 2025, 15, 3980. [Google Scholar] [CrossRef]
  17. Chen, Y.; Liu, T.X.; Shan, Y.; Zhong, S. The Emergence of Economic Rationality of GPT. Proc. Natl. Acad. Sci. USA 2023, 120, e2316205120. [Google Scholar] [CrossRef] [PubMed]
  18. Binz, M.; Schulz, E. Using Cognitive Psychology to Understand GPT-3. Proc. Natl. Acad. Sci. USA 2023, 120, e2218523120. [Google Scholar] [CrossRef] [PubMed]
  19. Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B.; et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst. 2025, 43, 42:1–42:55. [Google Scholar] [CrossRef]
  20. Huang, Q.; Dong, X.; Zhang, P.; Wang, B.; He, C.; Wang, J.; Lin, D.; Zhang, W.; Yu, N. OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via over-Trust Penalty and Retrospection-Allocation. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 13418–13427. [Google Scholar]
  21. Zhang, Y.; Gosline, R. Human Favoritism, Not AI Aversion: People’s Perceptions (and Bias) toward Generative AI, Human Experts, and Human–GAI Collaboration in Persuasive Content Generation. Judgm. Decis. Mak. 2023, 18, e41. [Google Scholar] [CrossRef]
  22. Celiktutan, B.; Klesse, A.-K.; Tuk, M.A. Acceptability Lies in the Eye of the Beholder: Self-Other Biases in GenAI Collaborations. Int. J. Res. Mark. 2024, 41, 496–512. [Google Scholar] [CrossRef]
  23. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  24. Goodhue, D.L.; Thompson, R.L. Task-Technology Fit and Individual Performance. MIS Q. 1995, 19, 213–236. [Google Scholar] [CrossRef]
  25. Shata, A.; Hartley, K. Artificial Intelligence and Communication Technologies in Academia: Faculty Perceptions and the Adoption of Generative AI. Int. J. Educ. Technol. High. Educ. 2025, 22, 14. [Google Scholar] [CrossRef]
  26. Al-Emran, M.; Al-Sharafi, M.A.; Foroughi, B.; Al-Qaysi, N.; Mansoor, D.; Beheshti, A.; Ali, N. Evaluating the Influence of Generative AI on Students’ Academic Performance through the Lenses of TPB and TTF Using a Hybrid SEM-ANN Approach. Educ. Inf. Technol. 2025, 30, 17557–17587. [Google Scholar] [CrossRef]
  27. Zhou, J.; Zhang, H. Factors Influencing University Students’ Continuance Intentions towards Self-Directed Learning Using Artificial Intelligence Tools: Insights from Structural Equation Modeling and Fuzzy-Set Qualitative Comparative Analysis. Appl. Sci. 2024, 14, 8363. [Google Scholar]
  28. Rapp, A.; Curti, L.; Boldi, A. The Human Side of Human-Chatbot Interaction: A Systematic Literature Review of Ten Years of Research on Text-Based Chatbots. Int. J. Hum.-Comput. Stud. 2021, 151, 102630. [Google Scholar] [CrossRef]
  29. Choung, H.; David, P.; Ross, A. Trust in AI and Its Role in the Acceptance of AI Technologies. Int. J. Hum.–Comput. Interact. 2023, 39, 1727–1739. [Google Scholar]
  30. Belanche, D.; Casaló, L.V.; Flavián, C. Artificial Intelligence in FinTech: Understanding Robo-Advisors Adoption among Customers. Ind. Manag. Data Syst. 2019, 119, 1411–1430. [Google Scholar] [CrossRef]
  31. Wang, Y.-J.; Wang, N.; Li, M.; Li, H.; Huang, G.Q. End-Users’ Acceptance of Intelligent Decision-Making: A Case Study in Digital Agriculture. Adv. Eng. Inform. 2024, 60, 102387. [Google Scholar] [CrossRef]
  32. Lee, J.; Kim, J.; Choi, J.Y. The Adoption of Virtual Reality Devices: The Technology Acceptance Model Integrating Enjoyment, Social Interaction, and Strength of the Social Ties. Telemat. Inform. 2019, 39, 37–48. [Google Scholar]
  33. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef]
  34. Jussupow, E.; Benbasat, I.; Heinzl, A. An Integrative Perspective on Algorithm Aversion and Appreciation in Decision-Making. MIS Q. 2024, 48, 1575–1590. [Google Scholar] [CrossRef]
  35. Ivanov, S.; Webster, C. Automated Decision-Making: Hoteliers’ Perceptions. Technol. Soc. 2024, 76, 102430. [Google Scholar] [CrossRef]
  36. Brüns, J.D.; Meißner, M. Do You Create Your Content Yourself? Using Generative Artificial Intelligence for Social Media Content Creation Diminishes Perceived Brand Authenticity. J. Retail. Consum. Serv. 2024, 79, 103790. [Google Scholar] [CrossRef]
  37. Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  38. Mårell-Olsson, E.; Bensch, S.; Hellström, T.; Alm, H.; Hyllbrant, A.; Leonardson, M.; Westberg, S. Navigating the Human–Robot Interface—Exploring Human Interactions and Perceptions with Social and Telepresence Robots. Appl. Sci. 2025, 15, 1127. [Google Scholar] [CrossRef]
  39. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  40. Keding, C.; Meissner, P. Managerial Overreliance on AI-Augmented Decision-Making Processes: How the Use of AI-Based Advisory Systems Shapes Choice Behavior in R&D Investment Decisions. Technol. Forecast. Soc. Change 2021, 171, 120970. [Google Scholar]
  41. Hyun Baek, T.; Kim, M. Is ChatGPT Scary Good? How User Motivations Affect Creepiness and Trust in Generative Artificial Intelligence. Telemat. Inform. 2023, 83, 102030. [Google Scholar] [CrossRef]
  42. Grewal, D.; Benoit, S.; Noble, S.M.; Guha, A.; Ahlbom, C.-P.; Nordfält, J. Leveraging In-Store Technology and AI: Increasing Customer and Employee Efficiency and Enhancing Their Experiences. J. Retail. 2023, 99, 487–504. [Google Scholar] [CrossRef]
  43. Benbya, H.; Strich, F.; Tamm, T. Navigating Generative Artificial Intelligence Promises and Perils for Knowledge and Creative Work. J. Assoc. Inf. Syst. 2024, 25, 23–36. [Google Scholar] [CrossRef]
  44. Liu, D.; Luo, J. College Learning from Classrooms to the Internet: Adoption of the YouTube as Supplementary Tool in COVID-19 Pandemic Environment. Educ. Urban Soc. 2022, 54, 848–870. [Google Scholar] [CrossRef]
  45. Bandura, A. Self-Efficacy Mechanism in Human Agency. Am. Psychol. 1982, 37, 122. [Google Scholar] [CrossRef]
  46. Sehgal, P.; Nambudiri, R.; Mishra, S.K. Teacher Effectiveness through Self-Efficacy, Collaboration and Principal Leadership. Int. J. Educ. Manag. 2017, 31, 505–517. [Google Scholar] [CrossRef]
  47. Tan, J.; Wu, L.; Ma, S. Collaborative Dialogue Patterns of Pair Programming and Their Impact on Programming Self-Efficacy and Coding Performance. Br. J. Educ. Technol. 2024, 55, 1060–1081. [Google Scholar] [CrossRef]
  48. Li, T.; Zhan, Z.; Ji, Y.; Li, T. Exploring Human and AI Collaboration in Inclusive STEM Teacher Training: A Synergistic Approach Based on Self-Determination Theory. Internet High. Educ. 2025, 65, 101003. [Google Scholar] [CrossRef]
  49. Zhenlei, Y.; Song, L.; Minyi, D.; Qiang, H. Assessing Knowledge Anxiety in Researchers: A Comprehensive Measurement Scale. PeerJ 2024, 12, e18478. [Google Scholar] [CrossRef]
  50. Shaw, J.D.; Zhu, J.; Duffy, M.K.; Scott, K.L.; Shih, H.-A.; Susanto, E. A Contingency Model of Conflict and Team Effectiveness. J. Appl. Psychol. 2011, 96, 391–400. [Google Scholar] [CrossRef]
  51. Shahzad, M.F.; Xu, S.; Zahid, H. Exploring the Impact of Generative AI-Based Technologies on Learning Performance through Self-Efficacy, Fairness & Ethics, Creativity, and Trust in Higher Education. Educ. Inf. Technol. 2025, 30, 3691–3716. [Google Scholar]
  52. Ragin, C.C. Redesigning Social Inquiry: Fuzzy Sets and Beyond; University of Chicago Press: Chicago, IL, USA, 2008. [Google Scholar]
  53. Xie, X.; Wang, H. How Can Open Innovation Ecosystem Modes Push Product Innovation Forward? An fsQCA Analysis. J. Bus. Res. 2020, 108, 29–41. [Google Scholar] [CrossRef]
  54. Cronbach, L.J. Coefficient Alpha and the Internal Structure of Tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef]
  55. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef]
  56. Huang, Z. Research on Innovation Capability of Regional Innovation System Based on Fuzzy-Set Qualitative Comparative Analysis: Evidence from China. Systems 2022, 10, 220. [Google Scholar] [CrossRef]
  57. Pappas, I.O.; Woodside, A.G. Fuzzy-Set Qualitative Comparative Analysis (fsQCA): Guidelines for Research Practice in Information Systems and Marketing. Int. J. Inf. Manag. 2021, 58, 102310. [Google Scholar] [CrossRef]
  58. Castro, F.G.; Kellison, J.G.; Boyd, S.J.; Kopak, A. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses. J. Mix. Methods Res. 2010, 4, 342–360. [Google Scholar] [CrossRef]
  59. Trist, E.L.; Bamforth, K.W. Some Social and Psychological Consequences of the Longwall Method of Coal-Getting. Hum. Relat. 1951, 4, 3–38. [Google Scholar]
Figure 1. The conceptual model of this study.
Figure 2. Path coefficients and significance levels. Notes: *** p < 0.001. ** p < 0.01. * p < 0.05.
Table 1. Results of reliability and validity analyses.

| Variables | Indicators | Factor Loading | Alpha | CR | AVE |
| --- | --- | --- | --- | --- | --- |
| Perceived Usefulness | PU1 | 0.787 | 0.842 | 0.842 | 0.640 |
| | PU2 | 0.815 | | | |
| | PU3 | 0.797 | | | |
| Perceived Ease of Use | PEU1 | 0.806 | 0.852 | 0.853 | 0.658 |
| | PEU2 | 0.825 | | | |
| | PEU3 | 0.803 | | | |
| Attitude | ATT1 | 0.834 | 0.893 | 0.893 | 0.677 |
| | ATT2 | 0.820 | | | |
| | ATT3 | 0.817 | | | |
| | ATT4 | 0.819 | | | |
| Human–GAI Trust | HAT1 | 0.808 | 0.878 | 0.878 | 0.644 |
| | HAT2 | 0.788 | | | |
| | HAT3 | 0.794 | | | |
| | HAT4 | 0.819 | | | |
| Task–Technology Fit | TTF1 | 0.830 | 0.842 | 0.842 | 0.640 |
| | TTF2 | 0.789 | | | |
| | TTF3 | 0.780 | | | |
| Collaborative Efficacy | CE1 | 0.791 | 0.879 | 0.879 | 0.645 |
| | CE2 | 0.803 | | | |
| | CE3 | 0.799 | | | |
| | CE4 | 0.819 | | | |
| Willingness to Adopt GAI for Collaborative Decision-Making | WACD1 | 0.804 | 0.844 | 0.844 | 0.644 |
| | WACD2 | 0.826 | | | |
| | WACD3 | 0.777 | | | |

Note: Indicators such as PU1 correspond to questionnaire items; see Appendix A for detailed descriptions.
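As an illustration, the CR and AVE values in Table 1 can be reproduced from the standardized factor loadings using the standard formulas CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / n. The following is a minimal Python sketch (for illustration only, not the software used in the study), with the loadings copied from the Perceived Usefulness row of Table 1:

```python
# Illustrative check of Table 1: composite reliability (CR) and
# average variance extracted (AVE) from standardized factor loadings.
def cr_and_ave(loadings):
    sum_l = sum(loadings)                     # sum of loadings
    sum_l2 = sum(l * l for l in loadings)     # sum of squared loadings
    error = sum(1 - l * l for l in loadings)  # sum of error variances
    cr = sum_l ** 2 / (sum_l ** 2 + error)
    ave = sum_l2 / len(loadings)
    return cr, ave

pu_loadings = [0.787, 0.815, 0.797]  # PU1-PU3 from Table 1
cr, ave = cr_and_ave(pu_loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # -> CR = 0.842, AVE = 0.640
```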
Table 2. Discriminant validity.

| | PU | PEU | ATT | HAT | TTF | CE | WACD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PU | 0.800 | | | | | | |
| PEU | 0.659 | 0.811 | | | | | |
| ATT | 0.501 | 0.488 | 0.823 | | | | |
| HAT | 0.335 | 0.295 | 0.583 | 0.802 | | | |
| TTF | 0.401 | 0.357 | 0.623 | 0.466 | 0.800 | | |
| CE | 0.501 | 0.448 | 0.367 | 0.403 | 0.303 | 0.803 | |
| WACD | 0.641 | 0.576 | 0.531 | 0.536 | 0.506 | 0.627 | 0.803 |
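In Table 2, the diagonal entries are the square roots of the corresponding AVE values from Table 1, and discriminant validity under the Fornell–Larcker criterion requires each diagonal entry to exceed the inter-construct correlations in its row and column. A minimal sketch of this check (values transcribed from Tables 1 and 2; only a subset of the correlations is shown):

```python
import math

# Fornell-Larcker check: sqrt(AVE) on the diagonal must exceed
# every inter-construct correlation in the same row/column.
ave = {"PU": 0.640, "PEU": 0.658, "ATT": 0.677, "HAT": 0.644,
       "TTF": 0.640, "CE": 0.645, "WACD": 0.644}
# A few lower-triangular correlations from Table 2.
corr = {("PEU", "PU"): 0.659, ("ATT", "PU"): 0.501, ("ATT", "PEU"): 0.488,
        ("HAT", "ATT"): 0.583, ("WACD", "PU"): 0.641, ("WACD", "CE"): 0.627}

for (a, b), r in corr.items():
    ok = math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
    print(f"{a}-{b}: r = {r:.3f}, sqrt(AVE) = "
          f"{math.sqrt(ave[a]):.3f}/{math.sqrt(ave[b]):.3f}, ok = {ok}")
```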
Table 3. Theoretical model fit index of this study.

| Fitting Index | χ²/df | GFI | CFI | TLI | IFI | RMSEA |
| --- | --- | --- | --- | --- | --- | --- |
| Recommended value | <3 | >0.90 | >0.90 | >0.90 | >0.90 | <0.08 |
| Actual value | 2.149 | 0.909 | 0.943 | 0.933 | 0.944 | 0.049 |
Table 4. Calibration values.

| Variables | Full Membership Threshold (95%) | Crossover (50%) | Full Non-Membership Threshold (5%) |
| --- | --- | --- | --- |
| PU | 4.67 | 4.33 | 1.67 |
| PEU | 4.67 | 4.00 | 1.67 |
| ATT | 4.75 | 4.00 | 1.50 |
| HAT | 4.75 | 4.00 | 1.50 |
| TTF | 4.67 | 4.00 | 1.67 |
| CE | 4.61 | 3.75 | 1.50 |
| WACD | 4.67 | 4.00 | 1.67 |
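These anchors correspond to the direct calibration method [52]: raw five-point scores are transformed into fuzzy-set memberships through a logistic function whose log-odds are fixed at the 95%, 50%, and 5% thresholds. A minimal sketch under that definition (using log(19) ≈ 2.944 as the log-odds at the outer anchors, so the thresholds map exactly to memberships of 0.95, 0.50, and 0.05):

```python
import math

LOG_ODDS = math.log(0.95 / 0.05)  # ~2.944 at the 95%/5% anchors

def calibrate(x, full, cross, non):
    """Direct-method calibration: raw score -> fuzzy membership in [0, 1]."""
    if x >= cross:
        log_odds = LOG_ODDS * (x - cross) / (full - cross)
    else:
        log_odds = -LOG_ODDS * (cross - x) / (cross - non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example with the PU anchors from Table 4 (full=4.67, cross=4.33, non=1.67).
for score in (1.67, 4.33, 4.67):
    print(score, round(calibrate(score, 4.67, 4.33, 1.67), 3))
# -> 1.67 0.05, 4.33 0.5, 4.67 0.95
```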
Table 5. Analysis of necessary conditions for the conditions and outcomes.

| Variables | Consistency | Coverage |
| --- | --- | --- |
| PU1 | 0.752320 | 0.821560 |
| ~PU1 | 0.562701 | 0.655996 |
| PEU1 | 0.776609 | 0.765062 |
| ~PEU1 | 0.499850 | 0.659075 |
| ATT1 | 0.742434 | 0.773669 |
| ~ATT1 | 0.554145 | 0.680871 |
| HAT1 | 0.756919 | 0.770243 |
| ~HAT1 | 0.553277 | 0.699640 |
| TTF1 | 0.770015 | 0.765279 |
| ~TTF1 | 0.524931 | 0.684112 |
| CE1 | 0.798244 | 0.797816 |
| ~CE1 | 0.501553 | 0.648865 |

Note: ~ denotes absence. Indicators such as PU1 correspond to questionnaire items; see Appendix A for detailed descriptions.
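The necessity statistics in Table 5 follow the standard fuzzy-set definitions: for a condition X and outcome Y, consistency = Σ min(xᵢ, yᵢ) / Σ yᵢ and coverage = Σ min(xᵢ, yᵢ) / Σ xᵢ, with the negation ~X obtained as 1 − x. A minimal sketch under these definitions (toy membership scores for illustration, not the study's data):

```python
def necessity(x, y):
    """Necessity consistency and coverage for fuzzy memberships x, y."""
    overlap = sum(min(xi, yi) for xi, yi in zip(x, y))
    consistency = overlap / sum(y)  # how far the outcome lies within the condition
    coverage = overlap / sum(x)     # how much of the condition the outcome uses
    return consistency, coverage

# Toy memberships for illustration only.
pu = [0.9, 0.7, 0.4, 0.8, 0.6]
wacd = [0.8, 0.6, 0.3, 0.9, 0.5]
not_pu = [1 - m for m in pu]  # ~PU

print(necessity(pu, wacd))
print(necessity(not_pu, wacd))
```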
Table 6. Configurations that lead to high willingness to adopt GAI for collaborative decision-making.

| Causal Conditions | S1 | S2 | S3 | S4 |
| --- | --- | --- | --- | --- |
| PU | | | | |
| PEU | | | | |
| ATT | | | | |
| HAT | | | | |
| TTF | | | | |
| CE | | | | |
| Consistency | 0.930 | 0.958 | 0.949 | 0.954 |
| Raw coverage | 0.557 | 0.380 | 0.517 | 0.514 |
| Unique coverage | 0.042 | 0.012 | 0.044 | 0.019 |
| Overall consistency | 0.918 | | | |
| Overall coverage | 0.650 | | | |

Note: ⬤ represents the presence of a core causal condition; ⊗ represents the absence of a peripheral causal condition; blank spaces indicate that a condition may be either present or absent.
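The solution statistics in Table 6 use the corresponding sufficiency definitions: a configuration's membership s is the minimum across its condition memberships (taking 1 − m for an absent condition), its consistency is Σ min(sᵢ, yᵢ) / Σ sᵢ, its raw coverage is Σ min(sᵢ, yᵢ) / Σ yᵢ, and its unique coverage is the solution coverage lost when that configuration is dropped. A minimal sketch under these definitions (toy memberships for illustration, not the study's data):

```python
def sufficiency(s, y):
    """Sufficiency (consistency, raw coverage) for configuration s and outcome y."""
    overlap = sum(min(si, yi) for si, yi in zip(s, y))
    return overlap / sum(s), overlap / sum(y)

def solution_coverage(configs, y):
    """Coverage of the fuzzy OR (max) across a set of configurations."""
    sol = [max(ms) for ms in zip(*configs)]
    return sum(min(si, yi) for si, yi in zip(sol, y)) / sum(y)

# Toy configuration memberships (one value per case).
s1 = [0.8, 0.3, 0.7, 0.2]
s2 = [0.2, 0.6, 0.4, 0.7]
y = [0.9, 0.5, 0.6, 0.6]

for i, s in enumerate((s1, s2), 1):
    cons, raw = sufficiency(s, y)
    others = [c for c in (s1, s2) if c is not s]
    unique = solution_coverage([s1, s2], y) - solution_coverage(others, y)
    print(f"S{i}: consistency={cons:.3f}, raw={raw:.3f}, unique={unique:.3f}")
```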