Article

Exploring Trust in Human–AI Collaboration in the Context of Multiplayer Online Games

Keke Hou, Tingting Hou and Lili Cai
1 School of Health Sciences, Guangzhou Xinhua University, Guangzhou 510520, China
2 School of Management, Zhengzhou University, Zhengzhou 450001, China
3 School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou 510520, China
* Author to whom correspondence should be addressed.
Systems 2023, 11(5), 217; https://doi.org/10.3390/systems11050217
Submission received: 12 February 2023 / Revised: 4 April 2023 / Accepted: 20 April 2023 / Published: 24 April 2023
(This article belongs to the Special Issue Human–AI Teaming: Synergy, Decision-Making and Interdependency)

Abstract

Human–AI collaboration has attracted interest from both scholars and practitioners. However, the relationships in human–AI teamwork have not been fully investigated. This study investigates the factors that influence trust in AI teammates and the intention to cooperate with them. We conducted an empirical study by developing a research model of human–AI collaboration. The model presents the influencing mechanisms of interactive characteristics (i.e., perceived anthropomorphism, perceived rapport, and perceived enjoyment), environmental characteristics (i.e., peer influence and facilitating conditions), and personal characteristics (i.e., self-efficacy) on trust in teammates and cooperative intention. A total of 423 valid surveys were collected to test the research model and hypothesized relationships. The results show that perceived rapport, perceived enjoyment, peer influence, facilitating conditions, and self-efficacy positively affect trust in AI teammates. Moreover, self-efficacy and trust positively relate to the intention to cooperate with AI teammates. This study contributes to the teamwork and human–AI collaboration literature by investigating different antecedents of the trust relationship and cooperative intention.

1. Introduction

The proliferation of artificial intelligence (AI) technologies has led numerous companies to invest significant resources in developing AI-related services across industries. The 2022 AI Index report shows that AI has become more affordable while delivering better performance [1]. Falling training costs and shorter training times are accelerating the adoption of AI technologies in the commercial sphere. Against this background, exponential growth in the AI field is expanding the interaction between humans and AI agents [2]. Human–AI collaboration, in which AI agents work as interdependent teammates cooperating with humans toward common goals, has become an irreversible trend [3].
Human–AI collaboration is a major area of interest within the field of management because it enables decision-makers to take meaningful actions to advance AI technologies [1,2]. Studies have investigated human–AI collaboration in different contexts, such as data science [4] and piloting [5]. Although a recent study examined human–AI teaming in the context of multiplayer online games and identified factors that influence people's willingness to collaborate with AI teammates as well as their preferred features of AI teammates [6], much of the research to date has been descriptive in nature. There has been no detailed investigation of the mechanisms that shape humans' intentions to work with AI teammates in highly complex environments.
Trust is at the core of understanding new technology adoption [7] and virtual team collaboration [8]. Trust in the team and trust in AI teammates are shaped by the interactions between humans and AI teammates [9]. In human–AI collaboration, humans and AI cooperate toward the same goal, and the collaboration involves human–AI interaction, personal characteristics, and environmental characteristics [10,11]. Trust may therefore explain how different characteristics (i.e., interactive characteristics between humans and AI, environmental characteristics, and personal characteristics) shape people's intention to collaborate with AI teammates. This study aims to investigate the influencing mechanisms of these characteristics on people's intention to collaborate with AI teammates. Multiplayer online games were selected as the research context because the adoption of AI technologies in such games involves a dynamic flow and complex collaboration [6]. Single-player games do not require the cooperation of others and do not involve teamwork, whereas multiplayer games are played with two or more teammates working together toward the goal of the match. In teamwork, mutual trust between teammates is an important factor affecting the willingness to collaborate. The decision to play alongside AI players in online games is thus a typical instance of human–AI collaboration: AI players act as virtual agents and have become the opponents or teammates of many real players, which raises the question of what affects real players' willingness to choose AI teammates. Accordingly, this investigation poses the following research questions: (1) What factors influence trust in AI teammates and the intention to cooperate with AI teammates, and how? (2) How does people's trust in AI teammates influence their intention to cooperate with them? To answer these questions, this work reviews previous research and develops and empirically tests a research model of human–AI collaboration. Based on the characteristics of online game teamwork, the model incorporates the interactive characteristics of online game platforms (perceived anthropomorphism, perceived rapport, and perceived enjoyment) as well as personal and environmental characteristics. The results can serve as a reference for other human–AI collaborative environments.
This paper presents the following theoretical contributions. First, this investigation contributes to the online games literature by studying the adoption of AI technologies in online games. Second, this study lays the groundwork for future research into relationships in human–AI collaboration. Third, this study adds to the team collaboration literature by identifying the influencing factors of trust perceptions and intention to cooperate with AI teammates. This work will extend our understanding of users’ views on human–AI collaboration.
The rest of this paper is organized as follows. Section 2 discusses the theoretical background, including human–AI collaboration and trust. Section 3 presents the theoretical model and hypothesis development. Section 4 describes the research method, including data collection and survey development, and Section 5 reports the data analysis. Section 6 concludes the paper and discusses its theoretical and practical contributions.

2. Literature Review

2.1. Human–AI Collaboration

With the advancement of technologies, human–AI collaboration has attracted the attention of scholars and practitioners seeking to improve teamwork [9,12]. Team collaboration can benefit from current technological team support, which adds value to teams [12]. While traditional teamwork involves two or more people working together toward the same goal, human–AI collaboration describes the interaction process of humans and AI machines. Consistent with previous studies [1,3], human–AI collaboration in the current investigation means that AI machines work as interdependent teammates collaborating with humans toward a common goal, such as solving problems, gaining insights, or creating value. Human–AI collaboration is increasingly important in computer-supported teamwork research [6].
Much of the current research recognizes the value of AI adoption in teamwork. For example, Seeber et al. [12] discuss future design strategies for human–AI team collaboration in various areas. Schelble et al. [9] explore dimensions of ethics in human–AI collaboration and the effect of trust-repair strategies on trust and team performance. Hauptman et al. [2] propose design suggestions for the development of adaptive autonomous teammates. Despite this increasing focus on design strategies, little research has fully investigated the role of human feelings in human–AI collaboration. The purpose of adopting AI technologies in team collaboration is to achieve better and more efficient work performance. Therefore, whether humans can trust AI agents as interdependent teammates deserves researchers' attention. Given that trust is an important concern in teamwork, this study investigates the factors that influence human trust in AI teammates and the intention to cooperate with them.

2.2. Trust

The significance of trust has been extensively addressed in prior studies [7,8,11,13]. Trust indicates the extent to which a party is willing to be vulnerable to another party's actions based on the trustor's positive expectations [14]. It is well established that trust-building is essential in virtual teams [8]. Consistent with a previous definition of trust [15], we define trust in AI teammates as a human's willingness to be vulnerable to the actions of AI teammates, based on the expectation that the AI teammates will perform actions important to the human. Recent work has established that the use of machine teammates can change the trust relationships among teammates [12]. The joining of AI teammates may affect how we trust teammates, especially when we tend to accept recommendations from AI machines rather than from human teammates. Trust in AI teammates has similarities to and differences from traditional trust relationships in team collaboration. As in traditional teamwork, trust in AI teammates arises from the interaction among teammates. Unlike traditional teamwork, however, trust in AI teammates is more complex because it reflects the interactions between humans and AI machines. Therefore, whereas the trust relationship in traditional teamwork is interpersonal trust, trust in AI teammates goes further by involving trust in technology.
Compared with the trust relationship among human teammates, the trust relationship between humans and AI teammates may be influenced by different factors, such as the interaction between humans and AI machines. Though trust is fundamental to determining human behavior, including decisions about technology adoption [7,12], researchers have not examined the trust relationship between human and AI teammates in much detail. This study explores the crucial role of trust in linking its influencing factors with human collaboration intention. We propose that the trust relationship between humans and AI teammates is influenced by the interaction between humans and AI machines. Beyond this, environmental factors, such as external conditions and the influence of peer groups, can shape human feelings in human–AI teamwork, and personal characteristics, such as self-efficacy in AI technology use, can also affect people's experiences in human–AI cooperation.

3. Theoretical Model and Hypotheses

Behavioral reasoning theory is a broad theory explaining the motives of human behaviors [16]. It explains the antecedents of a specific behavior by incorporating reasons for adoption or resistance [16]. Previous studies have used behavioral reasoning theory to explain the adoption of new technology [17]. This investigation adopts it as an overarching lens for understanding the adoption of human–AI collaboration in the context of online games. Trust is at the core of behavioral intentions in explaining new technology adoption [7]. This study therefore links the antecedents of AI technology use, trust, and the intention to cooperate with AI teammates. The antecedents fall into three categories: interactive characteristics, environmental characteristics, and personal characteristics.

3.1. The Influencing Factors of Trust

3.1.1. Interactive Characteristics

As discussed earlier, trust is influenced by the interaction between human and AI teammates. This interaction mainly involves three crucial characteristics: perceived anthropomorphism, perceived rapport, and perceived enjoyment [18,19,20]. Perceived anthropomorphism refers to the extent to which humans tend to regard AI teammates as actual human beings and seek emotional assistance during human–AI collaboration [21,22]. During human–AI team collaboration, people develop relationships and emotional connections with AI teammates through regular communication [23]. Existing research recognizes perceived anthropomorphism as an important factor in determining people's attitudes [21]. People tend to respond to AI machines with higher anthropomorphism as they would to human teammates. Therefore, people may be more willing to trust AI teammates with higher perceived anthropomorphism because the interaction more closely resembles cooperation between two humans.
Consistent with previous work [24], perceived rapport here refers to the personal connection between human and AI teammates. Human–AI rapport matters for human–AI collaboration because the cooperation is directed toward a shared goal. Evidence suggests that user–robot rapport is among the most important factors in customers' hospitality experience [19]. In human–AI team collaboration, the collaboration experience benefits from higher perceived rapport: when people build rapport with AI teammates, they tend to believe that the AI teammates can be trusted in the collaborative interaction.
Perceived enjoyment indicates the extent to which the interaction with AI teammates is considered pleasurable [25]. The experience of interacting with AI teammates is quite different from communicating with humans, and AI-related technology can provide enjoyment in human–AI interactions [20]. In human–AI team collaboration, people who enjoy interacting with AI machines will be more willing to build a trust relationship with AI teammates. Based on the above arguments, the following hypotheses are proposed:
H1. 
Perceived anthropomorphism has a positive effect on trust in AI teammates.
H2. 
Perceived rapport has a positive effect on trust in AI teammates.
H3. 
Perceived enjoyment has a positive effect on trust in AI teammates.

3.1.2. Environmental Characteristics

People’s feelings or perceptions about human–AI teamwork can be influenced by environmental factors, such as peer influence and facilitating conditions. Adapted from previous studies [14], peer influence indicates the extent to which people’s feelings about human–AI collaboration are influenced by peers, such as family members, friends, and colleagues. In the context of online games, peer influence refers to the adoption of AI teammates by family members, friends, or colleagues. New behaviors often originate from observing and imitating others [26]. When peers show positive feelings toward a new technology, people’s feelings about that technology tend to align with their peers’. In human–AI collaboration, peers’ feelings about AI teammates will therefore positively affect people’s trust perceptions: people whose peers feel more positively about AI technology use will be more likely to trust AI teammates.
Facilitating conditions refer to the resources that support the use of AI machines in human–AI collaboration [27,28]. In the context of online games, facilitating conditions involve the assistance offered by the online game platform. Previous studies have addressed the role of facilitating conditions in the adoption of new technologies [29,30]. We hypothesize that, as an important environmental factor, facilitating conditions will support people’s use of AI machines during human–AI teamwork and raise their trust in AI teammates. Based on the above arguments, we propose the following hypotheses:
H4. 
Peer influence has a positive effect on trust in AI teammates.
H5. 
Facilitating conditions have a positive effect on trust in AI teammates.

3.1.3. Personal Characteristics

Self-efficacy refers to confidence in one’s own ability to use AI technology in human–AI collaboration [31]. The crucial role of self-efficacy has been widely discussed in research on new technology adoption. For example, Rahman et al. [32] found that healthcare technology self-efficacy positively influenced people’s attitudes toward the use of health technologies. Jussupow et al. [33] show that diagnostic self-efficacy affects sensemaking processes in using AI systems. The relationship between self-efficacy and trust has also been verified in existing research [34]. Furthermore, people with higher self-efficacy are more willing to show positive attitudes toward new technologies. In human–AI collaboration, people with higher self-efficacy in using AI technology will tend to believe that AI teammates can be trusted to cooperate toward the same goal and will be inclined to cooperate with them. Therefore, the following hypotheses are proposed:
H6. 
Self-efficacy has a positive effect on trust in AI teammates.
H7. 
Self-efficacy has a positive effect on intention to cooperate with AI teammates.

3.2. Trust and Intention to Cooperate with AI Teammates

Trust is one of the most important factors in explaining technology-adoption behavior, including AI adoption [35], and it has been widely explored in virtual team collaboration [8]. Human–AI team collaboration is designed to leverage AI machines and new technology to facilitate teamwork. During team collaboration, people’s trust in teammates signifies that they intend to rely on their teammates even in the face of uncertainty or potential loss. In human–AI collaboration, people’s trust in AI teammates means that they are willing to rely on AI teammates to accomplish teamwork. We hypothesize that people with higher trust in AI teammates are more willing to cooperate with them. Therefore, the following hypothesis is presented:
H8. 
Trust in AI teammates has a positive effect on intention to cooperate with AI teammates.
Figure 1 presents the research model of human–AI collaboration.
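Expressed in equation form, the research model in Figure 1 implies two structural equations, one per endogenous construct, with the path coefficients β1–β8 corresponding to H1–H8 (construct abbreviations follow the tables below):

```latex
\begin{aligned}
\mathrm{TR}  &= \beta_1\,\mathrm{PA} + \beta_2\,\mathrm{PR} + \beta_3\,\mathrm{PE}
             + \beta_4\,\mathrm{PI} + \beta_5\,\mathrm{FC} + \beta_6\,\mathrm{SE} + \varepsilon_1 \\
\mathrm{INT} &= \beta_7\,\mathrm{SE} + \beta_8\,\mathrm{TR} + \varepsilon_2
\end{aligned}
```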

4. Research Method

4.1. Data Collection

An online survey was used to obtain the sample data. A research invitation containing a link to the e-questionnaire was distributed to the target sample through social network sites such as WeChat. As this was a non-interventional study, all participants were fully informed that the anonymity of all personal information would be assured, that the research was conducted for academic rather than commercial purposes, and that their data would be used without any foreseeable risk. Participants were told that completing the online questionnaire indicated their consent to the analysis of their data in this study. All respondents were required to have basic knowledge of multiplayer online games, such as League of Legends, Honor of Kings, Crossfire, and World of Warcraft. We conducted a pilot study to test reliability and validity; the reliability and validity of all scales were found to be acceptable. Formal data collection then yielded 500 responses. After removing samples that failed the attention-check questions, we retained 423 valid responses for further analysis. As presented in Table 1, most respondents (73.3%) are aged between 20 and 30 years; 55.8% are male and 44.2% female; and most (83.0%) have a 3- or 4-year college education.
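To make the screening step concrete, here is a minimal sketch of the attention-check filter. The file name, column name, and correct answer are hypothetical; the paper does not report these details.

```python
import pandas as pd

# Hypothetical names: "survey_responses.csv", "attn_check", and the
# required answer ATTN_CORRECT are assumptions for illustration only.
ATTN_CORRECT = 4

raw = pd.read_csv("survey_responses.csv")         # 500 raw responses
valid = raw[raw["attn_check"] == ATTN_CORRECT]    # drop failed attention checks
print(f"{len(valid)} valid responses retained")   # 423 in this study
```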

4.2. Survey Development

For better validity, all constructs were measured with established scales adapted from previous studies. Our primary construct, trust in AI teammates, was measured using three items adapted from Holten [13] and Pavlou and Gefen [36]. The other primary construct, intention to cooperate with AI teammates, was measured using three items from Lim et al. [37]. Drawing on Guido and Peluso [22] and Fernandes and Oliveira [24], perceived anthropomorphism was measured with two items. Perceived rapport was measured using three items adapted from Fernandes and Oliveira [24]. Two items adapted from Agarwal and Karahanna [38] were used to measure perceived enjoyment. Peer influence was measured using three items from Herath and Rao [39] and Carlson and Zmud [40]. Three items adapted from Thompson, Higgins, and Howell [41] and Van Doorn et al. [28] were used to assess facilitating conditions. Self-efficacy was measured with three items from Hua et al. [31]. All constructs were measured on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree). Table 2 presents the detailed questions and item scales.
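Given the item codes in Table 2, construct scores can be formed as the mean of each construct's items. A minimal sketch, assuming a DataFrame whose columns hold the 1–7 Likert responses under those codes:

```python
import pandas as pd

# Item codes per construct, following Table 2.
ITEMS = {
    "PA": ["PA1", "PA2"],
    "PR": ["PR1", "PR2", "PR3"],
    "PE": ["PE1", "PE2"],
    "PI": ["PI1", "PI2", "PI3"],
    "FC": ["FC1", "FC2", "FC3"],
    "SE": ["SE1", "SE2", "SE3"],
    "TR": ["TR1", "TR2", "TR3"],
    "INT": ["INT1", "INT2", "INT3"],
}

def construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each construct's items into one score column per construct."""
    return pd.DataFrame(
        {name: responses[cols].mean(axis=1) for name, cols in ITEMS.items()}
    )
```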

5. Data Analysis

5.1. Measurement Model

ADANCO 2.3.1, a software package for variance-based structural equation modeling (SEM), was employed to test the research model and hypothesized relationships. All variables in our research model are reflective. We tested the reflective measurement models for internal consistency, convergent validity, and discriminant validity [42]. Internal consistency was assessed with composite reliability (CR) and Cronbach’s alpha. As presented in Table 3, all CR and Cronbach’s alpha values exceeded the suggested threshold of 0.707, indicating good internal consistency reliability. Convergent validity was tested using outer loadings and average variance extracted (AVE). Table 3 shows that all outer loadings were higher than 0.7 and all AVE values were above 0.5, meaning each variable explains more than 50% of the variance of its indicators. Together, the AVE values and factor loadings show that the measurement model has good convergent validity.
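Cronbach's alpha, CR, and AVE follow standard formulas; the sketch below (not the ADANCO implementation) shows how the Table 3 values arise from standardized outer loadings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k) matrix of one construct's indicators."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return (loadings**2).mean()

# Example with the perceived-anthropomorphism loadings from Table 3:
pa = np.array([0.928, 0.918])
print(round(composite_reliability(pa), 3))       # 0.92  (Table 3: 0.920)
print(round(average_variance_extracted(pa), 3))  # 0.852 (Table 3: 0.851, rounding)
```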
Discriminant validity was tested using the Fornell–Larcker criterion and cross-loadings [42]. According to the Fornell–Larcker criterion, each construct’s AVE should be higher than its squared correlations with other constructs. As presented in Table 4, all AVE values on the diagonal exceed the squared correlations with any other variable. For the cross-loadings test, each indicator’s outer loading on its associated construct should exceed its loadings on any other construct; Table 5 shows that this holds, indicating that discriminant validity is not a concern in this study. Table 6 shows the inter-construct correlations; all correlations were below 0.9, suggesting that common method bias is not a major concern in this investigation.
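A compact numeric check of the Fornell–Larcker criterion using the reported statistics (construct order SE, PA, PR, PE, PI, FC, TR, INT):

```python
import numpy as np

# AVE values from Table 3, ordered SE, PA, PR, PE, PI, FC, TR, INT.
AVE = np.array([0.842, 0.851, 0.840, 0.891, 0.821, 0.852, 0.817, 0.864])

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Each construct's AVE must exceed its squared correlation with every
    other construct (equivalently, sqrt(AVE) exceeds all correlations)."""
    sq = corr**2
    np.fill_diagonal(sq, 0.0)  # ignore trivial self-correlations
    return bool(np.all(ave > sq.max(axis=1)))

# The binding case is the largest correlation, TR-INT = 0.855 (Table 6):
# 0.855**2 = 0.731, below both AVE(TR) = 0.817 and AVE(INT) = 0.864,
# matching the squared value reported in Table 4.
print(round(0.855**2, 3))  # 0.731
```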

5.2. Structural Model and Hypothesis Testing

We performed bootstrapping for inferential statistics. To test the research model and hypothesized relationships, we assessed the path coefficients, their significance, and the coefficient of determination (R² value). R² indicates goodness of fit, showing the share of variance in a dependent construct that is explained by its independent variables. The R² values for trust in AI teammates and intention to cooperate with AI teammates are 0.755 and 0.753, respectively, which is high for research on human–AI collaboration.
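For readers who want to reproduce the inference step in spirit, the sketch below bootstraps the two structural equations over respondent-level construct scores. Note the caveat: ADANCO estimates the paths with variance-based PLS, whereas this proxy fits ordinary least squares, so the coefficients will not match the reported ones exactly.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def ols_slopes(y: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Least-squares slope estimates (intercept fitted, then dropped)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

def bootstrap_paths(scores: pd.DataFrame, n_boot: int = 5000) -> pd.DataFrame:
    """Percentile-bootstrap 95% intervals for the H1-H8 path estimates."""
    exo = ["PA", "PR", "PE", "PI", "FC", "SE"]  # predictors of TR (H1-H6)
    names = [f"{x} -> TR" for x in exo] + ["SE -> INT", "TR -> INT"]  # H7, H8
    draws = []
    for _ in range(n_boot):
        s = scores.sample(len(scores), replace=True, random_state=rng)
        b_tr = ols_slopes(s["TR"].to_numpy(), s[exo].to_numpy())
        b_int = ols_slopes(s["INT"].to_numpy(), s[["SE", "TR"]].to_numpy())
        draws.append(np.concatenate([b_tr, b_int]))
    return pd.DataFrame(draws, columns=names).quantile([0.025, 0.975])
```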
Table 7 shows the results of the hypothesis tests. Perceived anthropomorphism has no significant effect on trust in AI teammates (β = 0.034, p > 0.05), so H1 is not supported. Perceived rapport positively influences trust in AI teammates (β = 0.193, p < 0.01); thus, H2 is supported. Perceived enjoyment is positively related to trust in AI teammates (β = 0.122, p < 0.05); thus, H3 is also supported. These results indicate that perceived rapport and enjoyment, rather than perceived anthropomorphism, are the interactive characteristics that matter in human–AI collaboration. A possible explanation is that people can clearly recognize the difference between machines and people, and they may seek an experience of human–AI collaboration that differs from traditional human–human collaboration.
In addition, peer influence positively affects trust in AI teammates (β = 0.165, p < 0.05), supporting H4. This means that the use of AI technology among peers directly influences people’s trust perceptions of AI teammates: when surrounded by peers who use AI technology, people tend to trust their AI teammates. Facilitating conditions positively influence trust in AI teammates (β = 0.295, p < 0.001), so H5 is also supported. This indicates that external conditions that facilitate AI technology use foster people’s trust in AI teammates.
Self-efficacy is positively related to trust in AI teammates (β = 0.165, p < 0.01) and to intention to cooperate with AI teammates (β = 0.227, p < 0.001), supporting H6 and H7. These results reveal the crucial role of self-efficacy in AI technology use in determining human–AI collaboration: people’s confidence in using this new technology helps them trust and collaborate with AI teammates.
Furthermore, trust in AI teammates positively influences intention to cooperate with AI teammates (β = 0.683, p < 0.001), so H8 is supported. Trust is thus a determinant of people’s intention to collaborate with AI teammates.

6. Discussion

6.1. Summary of Findings

The main goal of this study is to determine what contributes to human–AI collaboration. For this purpose, this investigation develops and empirically tests a research model of the mechanisms that shape trust in AI teammates and the intention to cooperate with them. First, to answer the first research question (i.e., what factors influence trust in AI teammates and intention to cooperate with AI teammates, and how?), we propose three sets of influencing characteristics: interactive characteristics, external characteristics, and personal characteristics. By developing and assessing the research model, this study shows that, among the interactive characteristics, perceived rapport and enjoyment positively affect people’s trust in AI teammates.
These findings are consistent with previous research identifying the crucial role of interactions in building trust in human–AI teams [10] and showing that perceived enjoyment facilitates positive intentions [43]. One unanticipated finding is that the relationship between perceived anthropomorphism and trust in AI teammates is not supported. This may be because the interaction between human and AI teammates in multiplayer online games centers on action coordination rather than verbal communication or other figurative expression. Game platforms currently offer only limited communication between real players and their AI teammates [6]. What matters to real players is whether AI teammates can understand their goals, quests, and actions so that the team can act accordingly and win the match; whether the AI teammate seems like a “real person” is less important.
As external characteristics, peer influence and facilitating conditions positively relate to trust in AI teammates. Previous research has shown the effect of facilitating conditions and social influence on behavioral intention in IT acceptance [44]. This study further verifies the role of peer influence and facilitating conditions in determining behavioral intentions through their effect on trust in AI teammates in the context of online games. As a personal characteristic, self-efficacy positively affects both trust in and intention to cooperate with AI teammates.
Second, to answer the second research question (i.e., how does people’s trust in AI teammates influence their intention to cooperate with AI teammates?), the results verify the positive relationship between trust in AI teammates and intention to cooperate with them. This finding is consistent with previous research [8,35].

6.2. Theoretical and Practical Implications

Our study makes the following contributions to the human–AI collaboration literature. First, it contributes to the online game literature. Recent studies have focused on online game strategy and addiction [45,46] rather than on the adoption of new technologies in online games. The adoption of AI technologies in online games has not been fully investigated, even though human–AI collaboration has drawn practitioners’ attention. This work contributes to the online game literature by emphasizing customers’ views of cooperating with AI teammates in multiplayer online games.
Second, this study lays the groundwork for future research into relationships in human–AI collaboration. Previous studies have emphasized the design strategies of human–AI teamwork [2,12], while this study endeavors to understand the relationships generated during human–AI interaction. Few studies have attempted to comprehensively assess the factors that may influence the adoption and performance of human–AI collaboration. This study proposes a relatively comprehensive research model for explaining people’s trust perceptions and intention to cooperate with AI teammates.
Third, this study adds to the team collaboration literature by identifying the factors that influence trust perceptions and the intention to cooperate with AI teammates. It establishes a quantitative framework for examining the role of interactive, external, and personal characteristics in determining human–AI collaboration. The current understanding of human–AI collaboration is still limited, and the framework and research model hypothesized in this investigation support a theoretical account of the human–AI collaboration decision. The quantitative results indicate that this decision is determined by perceived rapport with and enjoyment of AI machines, the influence of peers, facilitating conditions, and users’ self-efficacy in using AI technologies. These findings will assist future research into human–AI collaboration.
This work also has practical implications. Although human–AI collaboration has become an irreversible trend, few practical insights have been shared on managing AI technologies to improve customers’ adoption of and trust in AI teammates. This investigation verifies the significance of several antecedents of trust in AI teammates that are associated with the intention to cooperate with them. Managers who recognize that perceived rapport and enjoyment of using AI machines positively influence human–AI collaboration should steer the design of AI teammates toward stronger relationships and more enjoyable experiences. Managers should also recognize the significance of external factors and provide an instruction manual for using AI machines in teamwork, and a dedicated team should be available to help users who encounter difficulties in human–computer interaction. Training in using AI technologies should be provided at the start of human–AI collaboration to enhance users’ self-efficacy. Finally, since perceived anthropomorphism does not significantly influence trust in AI teammates, more effort should be invested in improving rapport and enjoyment during human–AI interaction than in the anthropomorphic design of AI machines.

6.3. Limitations and Future Directions

As with any study, this work is subject to some limitations. First, it was conducted in the context of multiplayer online games; the results should be tested in other human–AI collaboration contexts to establish generalizability. Second, like much behavioral research, this work investigates behavioral intention rather than actual behavior. With the increasing adoption of AI technologies in online games, future research is encouraged to investigate the relationship between users’ behavioral intention and their actual adoption of human–AI collaboration.

Author Contributions

Conceptualization, K.H. and T.H.; methodology, K.H.; software, K.H.; validation, K.H., T.H. and L.C.; formal analysis, K.H.; investigation, K.H. and L.C.; resources, K.H.; data curation, K.H.; writing—original draft preparation, K.H. and T.H.; writing—review and editing, L.C.; visualization, T.H.; supervision, L.C.; project administration, K.H.; funding acquisition, K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Undergraduate Teaching Quality and Teaching Reform Project of Guangdong Province (Letter from the Higher Education Office of the Guangdong Provincial Department of Education [2023] No. 4), the College Youth Innovation Talent Project of Guangdong Province, China (Grant No. 2022WQNCX099), the Higher Education Research Project sponsored by the Guangdong Higher Education Academy (Grant No. 22GQN14), and the Teaching and Research Project of Guangzhou Xinhua University (Grant No. 2022J036).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, D.; Mishra, S.; Brynjolfsson, E.; Etchemendy, J.; Ganguli, D.; Grosz, B.; Lyons, T.; Manyika, J.; Niebles, J.C.; Sellitto, M.; et al. The AI Index 2022 Annual Report. AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University. March 2022. Available online: https://aiindex.stanford.edu/report/ (accessed on 10 March 2022).
  2. Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Comput. Hum. Behav. 2023, 138, 107451. [Google Scholar] [CrossRef]
  3. McNeese, N.J.; Demir, M.; Cooke, N.J.; Myers, C. Teaming with a synthetic teammate: Insights into human-autonomy teaming. Hum. Factors 2018, 60, 262–273. [Google Scholar] [CrossRef]
  4. Wang, D.; Weisz, J.D.; Muller, M.; Ram, P.; Geyer, W.; Dugan, C.; Tausczik, Y.; Samulowitz, H.; Gray, A. Human-AI collaboration in data science: Exploring data scientists’ perceptions of automated AI. In Proceedings of the ACM on Human-Computer Interaction, Glasgow, UK, 4–9 May 2019. [Google Scholar]
  5. Liu, H.; Lai, V.; Tan, C. Understanding the effect of out-of-distribution examples and interactive explanations on human-ai decision making. In Proceedings of the ACM on Human-Computer Interaction, Online, 8–13 May 2021. [Google Scholar]
  6. Zhang, R.; McNeese, N.J.; Freeman, G.; Musick, G. “An ideal human” expectations of AI teammates in human-AI teaming. In Proceedings of the ACM on Human-Computer Interaction, Online, 8–13 May 2021. [Google Scholar]
  7. Ho, S.M.; Ocasio-Velázquez, M.; Booth, C. Trust or consequences? Causal effects of perceived risk and subjective norms on cloud technology adoption. Comput. Secur. 2017, 70, 581–595. [Google Scholar] [CrossRef]
  8. Zakaria, N.; Yusof, S.A.M. Crossing cultural boundaries using the internet: Toward building a model of swift trust formation in global virtual teams. J. Int. Manag. 2020, 26, 100654. [Google Scholar] [CrossRef]
  9. Schelble, B.G.; Lopez, J.; Textor, C.; Zhang, R.; McNeese, N.J.; Pak, R.; Freeman, G. Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming. Hum. Factors 2022, 00187208221116952. [Google Scholar] [CrossRef]
  10. Moussawi, S.; Koufaris, M.; Benbunan-Fich, R. How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electron. Mark. 2021, 31, 343–364. [Google Scholar] [CrossRef]
  11. Chi, O.H.; Jia, S.; Li, Y.; Gursoy, D. Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Comput. Hum. Behav. 2021, 118, 106700. [Google Scholar] [CrossRef]
  12. Seeber, I.; Bittner, E.; Briggs, R.O.; De Vreede, T.; De Vreede, G.J.; Elkins, A.; Maier, R.; Merz, A.B.; Oeste-Reiß, S.; Randrup, N.; et al. Machines as teammates: A research agenda on AI in team collaboration. Inf. Manag. 2020, 57, 103174. [Google Scholar] [CrossRef]
  13. Holten, R. Trust in sharing encounters among millennials. Inf. Syst. J. 2019, 29, 1083–1119. [Google Scholar] [CrossRef]
  14. Ozdemir, S.; Zhang, S.; Gupta, S.; Bebek, G. The effects of trust and peer influence on corporate brand—Consumer relationships and consumer loyalty. J. Bus. Res. 2020, 117, 791–805. [Google Scholar] [CrossRef]
  15. Körber, M. Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018) Volume VI: Transport Ergonomics and Human Factors (TEHF), Aerospace Human Factors and Ergonomics 20; Springer International Publishing: Cham, Switzerland, 2019; pp. 13–30. [Google Scholar]
  16. Mariani, M.M.; Perez-Vega, R.; Wirtz, J. AI in marketing, consumer research and psychology: A systematic literature review and research agenda. Psychol. Mark. 2022, 39, 755–776. [Google Scholar] [CrossRef]
  17. Huang, Y.; Qian, L. Understanding the potential adoption of autonomous vehicles in China: The perspective of behavioral reasoning theory. Psychol. Mark. 2021, 38, 669–690. [Google Scholar] [CrossRef]
  18. Li, M.; Suh, A. Anthropomorphism in AI-enabled technology: A literature review. Electron. Mark. 2022, 32, 2245–2275. [Google Scholar] [CrossRef]
  19. Qiu, H.; Li, M.; Shu, B.; Bai, B. Enhancing hospitality experience with service robots: The mediating role of rapport building. J. Hosp. Mark. Manag. 2020, 29, 247–268. [Google Scholar] [CrossRef]
  20. Kahn, B.E.; Inman, J.J.; Verhoef, P.C. Introduction to special issue: Consumer response to the evolving retailing landscape. J. Assoc. Consum. Res. 2018, 3, 255–259. [Google Scholar] [CrossRef]
  21. Mishra, A.; Shukla, A.; Sharma, S.K. Psychological determinants of users’ adoption and word-of-mouth recommendations of smart voice assistants. Int. J. Inf. Manag. 2022, 67, 102413. [Google Scholar] [CrossRef]
  22. Guido, G.; Peluso, A.M. Brand anthropomorphism: Conceptualization, measurement, and impact on brand personality and loyalty. J. Brand Manag. 2015, 22, 1–19. [Google Scholar] [CrossRef]
  23. Hur, J.D.; Koo, M.; Hofmann, W. When temptations come alive: How anthropomorphism undermines self-control. J. Consum. Res. 2015, 42, 340–358. [Google Scholar] [CrossRef]
  24. Fernandes, T.; Oliveira, E. Understanding consumers’ acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption. J. Bus. Res. 2021, 122, 180–191. [Google Scholar] [CrossRef]
  25. Pillai, R.; Sivathanu, B.; Dwivedi, Y.K. Shopping intention at AI-powered automated retail stores (AIPARS). J. Retail. Consum. Serv. 2020, 57, 102207. [Google Scholar] [CrossRef]
  26. Hou, T.; Hou, K.; Wang, X.; Luo, X.R. Why I give money to unknown people? An investigation of online donation and forwarding intention. Electron. Commer. Res. Appl. 2021, 47, 101055. [Google Scholar] [CrossRef]
  27. Teo, T. The impact of subjective norm and facilitating conditions on pre-service teachers’ attitude toward computer use: A structural equation modeling of an extended technology acceptance model. J. Educ. Comput. Res. 2009, 40, 89–109. [Google Scholar] [CrossRef]
  28. Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers’ service experiences. J. Serv. Res. 2017, 20, 43–58. [Google Scholar] [CrossRef]
  29. Park, S.H.S.; Lee, L.; Yi, M.Y. Group-level effects of facilitating conditions on individual acceptance of information systems. Inf. Technol. Manag. 2011, 12, 315–334. [Google Scholar] [CrossRef]
  30. Peñarroja, V.; Sánchez, J.; Gamero, N.; Orengo, V.; Zornoza, A.M. The influence of organisational facilitating conditions and technology acceptance factors on the effectiveness of virtual communities of practice. Behav. Inf. Technol. 2019, 38, 845–857. [Google Scholar] [CrossRef]
  31. Hua, Y.; Cheng, X.; Hou, T.; Luo, R. Monetary rewards, intrinsic motivators, and work engagement in the IT-enabled sharing economy: A mixed-methods investigation of Internet taxi drivers. Decis. Sci. 2020, 51, 755–785. [Google Scholar] [CrossRef]
  32. Rahman, M.S.; Ko, M.; Warren, J.; Carpenter, D. Healthcare Technology Self-Efficacy (HTSE) and its influence on individual attitude: An empirical study. Comput. Hum. Behav. 2016, 58, 12–24. [Google Scholar] [CrossRef]
  33. Jussupow, E.; Spohrer, K.; Heinzl, A. Radiologists’ usage of diagnostic AI systems: The role of diagnostic self-efficacy for sensemaking from confirmation and disconfirmation. Bus. Inf. Syst. Eng. 2022, 64, 293–309. [Google Scholar] [CrossRef]
  34. Kim, Y.H.; Kim, D.J.; Hwang, Y. Exploring online transaction self-efficacy in trust building in B2C e-commerce. J. Organ. End User Comput. (JOEUC) 2009, 21, 37–59. [Google Scholar] [CrossRef]
  35. Bedué, P.; Fritzsche, A. Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. J. Enterp. Inf. Manag. 2022, 35, 530–549. [Google Scholar] [CrossRef]
  36. Pavlou, P.A.; Gefen, D. Building effective online marketplaces with institution-based trust. Inf. Syst. Res. 2004, 15, 37–59. [Google Scholar] [CrossRef]
  37. Lim, K.H.; Sia, C.L.; Lee, M.K.; Benbasat, I. Do I trust you online, and if so, will I buy? An empirical study of two trust-building strategies. J. Manag. Inf. Syst. 2006, 23, 233–266. [Google Scholar] [CrossRef]
  38. Agarwal, R.; Karahanna, E. Time flies when you’re having fun: Cognitive absorption and beliefs about information technology usage. MIS Q. 2000, 24, 665–694. [Google Scholar] [CrossRef]
  39. Herath, T.; Rao, H.R. Encouraging information security behaviors in organizations: Role of penalties, pressures and perceived effectiveness. Decis. Support Syst. 2009, 47, 154–165. [Google Scholar] [CrossRef]
  40. Carlson, J.R.; Zmud, R.W. Channel expansion theory and the experiential nature of media richness perceptions. Acad. Manag. J. 1999, 42, 153–170. [Google Scholar] [CrossRef]
  41. Thompson, R.L.; Higgins, C.A.; Howell, J.M. Personal computing: Toward a conceptual model of utilization. MIS Q. 1991, 15, 125–143. [Google Scholar] [CrossRef]
  42. Hair, J.F.; Hult, G.T.M.; Ringle, C.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2016. [Google Scholar]
  43. Ezer, N.; Bruni, S.; Cai, Y.; Hepenstal, S.J.; Miller, C.A.; Schmorrow, D.D. Trust engineering for human-AI teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Seattle, WA, USA, 28 October–1 November 2019; SAGE Publications: Los Angeles, CA, USA, 2019; Volume 63, No. 1. pp. 322–326. [Google Scholar]
  44. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  45. Mishra, S.; Malhotra, G. The gamification of in-game advertising: Examining the role of psychological ownership and advertisement intrusiveness. Int. J. Inf. Manag. 2021, 61, 102245. [Google Scholar] [CrossRef]
  46. Salehi, E.; Fallahchai, R.; Griffiths, M. Online addictions among adolescents and young adults in Iran: The role of attachment styles and gender. Soc. Sci. Comput. Rev. 2022, 41, 554–572. [Google Scholar] [CrossRef]
Figure 1. The research model of human–AI collaboration.
Table 1. Demographic characteristics of respondents (n = 423).

Variable | Level | Frequency | Percentage
Gender | Male | 236 | 55.8%
 | Female | 187 | 44.2%
Age | <20 | 77 | 18.2%
 | 20–25 | 178 | 42.1%
 | 26–30 | 132 | 31.2%
 | >30 | 36 | 8.5%
Education | High school or below | 20 | 4.7%
 | 3- or 4-year college | 351 | 83.0%
 | Graduate school or higher | 52 | 12.3%
Monthly income (RMB) | <5000 | 169 | 39.95%
 | 5000–10,000 | 209 | 49.4%
 | 10,000–20,000 | 38 | 9.0%
 | >20,000 | 7 | 1.7%
Table 2. Measurement items.

Construct | Code | Questions and Item Scales
Perceived anthropomorphism | PA1 | For me, it is very important that AI teammates act like humans.
 | PA2 | For me, sometimes the AI teammate seems to have real feelings.
Perceived rapport | PR1 | AI teammates will relate well to me.
 | PR2 | I think there will be a “bond” between AI teammates and myself.
 | PR3 | I think there will be a “connection” between AI teammates and myself.
Perceived enjoyment | PE1 | For me, it is very important that I have fun interacting with AI teammates.
 | PE2 | For me, it is very important that I enjoy working with AI teammates.
Peer influence | PI1 | It is likely that the majority of my friends would cooperate with AI teammates.
 | PI2 | My friends/colleagues/co-workers frequently cooperate with AI teammates.
 | PI3 | My friends/colleagues/co-workers have expressed to me how interesting the cooperation with AI teammates is.
Facilitating conditions | FC1 | On the online game platform, guidance would be available to me about the cooperation with AI teammates.
 | FC2 | On the online game platform, specialized instruction concerning the AI teammates would be available to me.
 | FC3 | On the online game platform, a specific employee (or group) should be available for assistance when I have difficulties in collaboration with AI teammates.
Self-efficacy | SE1 | Cooperation with AI teammates is well within the scope of my abilities.
 | SE2 | I feel I will be overqualified for the cooperation with AI teammates.
 | SE3 | I have all the technical knowledge I need to cooperate with AI teammates.
Trust in AI teammates | TR1 | I usually trust the AI teammates.
 | TR2 | I believe that the AI teammates are trustworthy.
 | TR3 | I feel that AI teammates are honest.
Intention to cooperate with AI teammates | INT1 | I will consider cooperating with AI teammates.
 | INT2 | I would seriously contemplate working with AI teammates.
 | INT3 | I am likely to make future cooperation with AI teammates.
Table 3. Descriptive statistics.

Construct | Item | Factor Loading | Weight | CR | α | AVE | Mean | S.D.
Perceived anthropomorphism | PA1 | 0.928 *** | 0.558 *** | 0.920 | 0.826 | 0.851 | 4.20 | 1.30
 | PA2 | 0.918 *** | 0.525 *** | | | | |
Perceived rapport | PR1 | 0.909 *** | 0.374 *** | 0.940 | 0.905 | 0.840 | 4.05 | 1.10
 | PR2 | 0.922 *** | 0.356 *** | | | | |
 | PR3 | 0.919 *** | 0.362 *** | | | | |
Perceived enjoyment | PE1 | 0.942 *** | 0.520 *** | 0.942 | 0.877 | 0.891 | 4.34 | 1.23
 | PE2 | 0.946 *** | 0.540 *** | | | | |
Peer influence | PI1 | 0.919 *** | 0.393 *** | 0.932 | 0.891 | 0.821 | 4.12 | 1.16
 | PI2 | 0.910 *** | 0.357 *** | | | | |
 | PI3 | 0.889 *** | 0.353 *** | | | | |
Facilitating conditions | FC1 | 0.925 *** | 0.365 *** | 0.945 | 0.913 | 0.852 | 4.29 | 1.22
 | FC2 | 0.923 *** | 0.356 *** | | | | |
 | FC3 | 0.922 *** | 0.362 *** | | | | |
Self-efficacy | SE1 | 0.923 *** | 0.380 *** | 0.941 | 0.906 | 0.842 | 4.21 | 1.15
 | SE2 | 0.926 *** | 0.362 *** | | | | |
 | SE3 | 0.905 *** | 0.348 *** | | | | |
Trust in AI teammates | TR1 | 0.915 *** | 0.376 *** | 0.930 | 0.888 | 0.817 | 4.20 | 1.10
 | TR2 | 0.913 *** | 0.366 *** | | | | |
 | TR3 | 0.883 *** | 0.365 *** | | | | |
Intention to cooperate with AI teammates | INT1 | 0.927 *** | 0.353 *** | 0.950 | 0.921 | 0.864 | 4.24 | 1.20
 | INT2 | 0.930 *** | 0.355 *** | | | | |
 | INT3 | 0.931 *** | 0.368 *** | | | | |
Note: *** p < 0.001.
Table 4. Discriminant validity evaluation based on the Fornell–Larcker criterion.

Constructs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
1. SE | 0.842 | | | | | | |
2. PA | 0.441 | 0.851 | | | | | |
3. PR | 0.486 | 0.453 | 0.840 | | | | |
4. PE | 0.505 | 0.546 | 0.491 | 0.891 | | | |
5. PI | 0.528 | 0.459 | 0.607 | 0.536 | 0.821 | | |
6. FC | 0.597 | 0.490 | 0.523 | 0.640 | 0.593 | 0.852 | |
7. TR | 0.574 | 0.466 | 0.576 | 0.573 | 0.602 | 0.658 | 0.817 |
8. INT | 0.554 | 0.445 | 0.553 | 0.622 | 0.613 | 0.701 | 0.731 | 0.864
Notes: Off-diagonal values are squared correlations; AVE on the diagonal.
Table 5. Factor loadings and cross-loadings.

Indicator | SE | PA | PR | PE | PI | FC | TR | INT
SE1 | 0.923 | 0.612 | 0.650 | 0.656 | 0.692 | 0.741 | 0.722 | 0.718
SE2 | 0.926 | 0.610 | 0.623 | 0.683 | 0.649 | 0.692 | 0.697 | 0.675
SE3 | 0.905 | 0.606 | 0.647 | 0.616 | 0.660 | 0.693 | 0.664 | 0.653
PA1 | 0.641 | 0.928 | 0.594 | 0.691 | 0.638 | 0.673 | 0.649 | 0.634
PA2 | 0.583 | 0.918 | 0.649 | 0.672 | 0.612 | 0.617 | 0.610 | 0.596
PR1 | 0.656 | 0.612 | 0.909 | 0.654 | 0.718 | 0.669 | 0.714 | 0.711
PR2 | 0.644 | 0.594 | 0.922 | 0.622 | 0.705 | 0.648 | 0.680 | 0.656
PR3 | 0.617 | 0.644 | 0.919 | 0.650 | 0.719 | 0.672 | 0.691 | 0.676
PE1 | 0.663 | 0.706 | 0.656 | 0.942 | 0.670 | 0.764 | 0.701 | 0.735
PE2 | 0.678 | 0.689 | 0.667 | 0.946 | 0.712 | 0.747 | 0.728 | 0.753
PI1 | 0.690 | 0.642 | 0.726 | 0.715 | 0.919 | 0.736 | 0.749 | 0.773
PI2 | 0.668 | 0.597 | 0.673 | 0.652 | 0.910 | 0.688 | 0.681 | 0.693
PI3 | 0.616 | 0.602 | 0.718 | 0.620 | 0.889 | 0.666 | 0.674 | 0.657
FC1 | 0.721 | 0.636 | 0.697 | 0.747 | 0.742 | 0.925 | 0.757 | 0.790
FC2 | 0.695 | 0.635 | 0.680 | 0.703 | 0.697 | 0.923 | 0.738 | 0.774
FC3 | 0.725 | 0.668 | 0.627 | 0.765 | 0.693 | 0.922 | 0.752 | 0.754
TR1 | 0.689 | 0.623 | 0.733 | 0.697 | 0.712 | 0.735 | 0.915 | 0.782
TR2 | 0.649 | 0.631 | 0.688 | 0.684 | 0.696 | 0.708 | 0.913 | 0.778
TR3 | 0.716 | 0.596 | 0.635 | 0.671 | 0.694 | 0.756 | 0.883 | 0.758
INT1 | 0.670 | 0.622 | 0.690 | 0.727 | 0.728 | 0.761 | 0.792 | 0.927
INT2 | 0.669 | 0.614 | 0.679 | 0.724 | 0.715 | 0.784 | 0.799 | 0.930
INT3 | 0.734 | 0.623 | 0.704 | 0.747 | 0.740 | 0.789 | 0.793 | 0.931
Table 6. Inter-construct correlations.

Constructs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
1. SE | 1.000 | | | | | | |
2. PA | 0.664 | 1.000 | | | | | |
3. PR | 0.697 | 0.673 | 1.000 | | | | |
4. PE | 0.710 | 0.739 | 0.701 | 1.000 | | | |
5. PI | 0.727 | 0.678 | 0.779 | 0.732 | 1.000 | | |
6. FC | 0.773 | 0.700 | 0.723 | 0.800 | 0.770 | 1.000 | |
7. TR | 0.757 | 0.683 | 0.759 | 0.757 | 0.776 | 0.811 | 1.000 |
8. INT | 0.744 | 0.667 | 0.744 | 0.789 | 0.783 | 0.837 | 0.855 | 1.000
Table 7. Hypothesis testing results.

Relationship | β | p-Value | Support
H1: Perceived anthropomorphism → trust in AI teammates | 0.034 | >0.05 | Not supported
H2: Perceived rapport → trust in AI teammates | 0.193 ** | <0.01 | Supported
H3: Perceived enjoyment → trust in AI teammates | 0.122 * | <0.05 | Supported
H4: Peer influence → trust in AI teammates | 0.165 * | <0.05 | Supported
H5: Facilitating conditions → trust in AI teammates | 0.295 *** | <0.001 | Supported
H6: Self-efficacy → trust in AI teammates | 0.165 ** | <0.01 | Supported
H7: Self-efficacy → intention to cooperate with AI teammates | 0.227 *** | <0.001 | Supported
H8: Trust in AI teammates → intention to cooperate with AI teammates | 0.683 *** | <0.001 | Supported
Note: *** p < 0.001, ** p < 0.01, * p < 0.05.