How Do Consumers Trust and Accept AI Agents? An Extended Theoretical Framework and Empirical Evidence
Abstract
1. Introduction
2. Theoretical Background
2.1. Trust Model Based on HSM
2.2. Components of Trust
2.3. Drivers of Trust
2.4. Outcomes of Trust
3. Materials and Methods
3.1. Data Collection
3.2. Instrument Development
“As a key step in this survey, please read the following text carefully before you answer the questions.
The definition of AI agent: AI agents are entities based on AI technology, which can be physical (such as a robot) or virtual (such as a software program). They have the ability to perceive information from the environment, make autonomous decisions, and carry out actions to achieve specific goals.
The development trend of AI agents: As the underlying technology matures, AI agents are rapidly moving into real-world use. In the next five years, they will be widely used in education, healthcare, work, and travel, helping you handle almost any task. As Bill Gates has said, AI agents will revolutionize how people live within the next few years.
Scenario imagination: Please imagine that in the near future, AI agents will be deeply integrated into your daily life and work as intelligent partners. Please rate your views on AI agents based on a wide range of future scenarios.”
4. Data Analysis and Results
4.1. Sample Characteristics
4.2. Measurement Model
4.3. Structural Model and Hypothesis Test
5. Discussion and Implications
5.1. Main Findings
5.2. Theoretical and Practical Implications
5.3. Limitations and Future Research
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. The Survey Questionnaire Items
Construct | Scale Items |
---|---|
Perceived Pleasure (PP) | PP1: Interacting with AI agents will be very fun. PP2: The actual process of using AI agents will be enjoyable. PP3: Using AI agents will be pleasant. |
Anthropomorphism (AN) | AN1: AI agents will experience emotions. AN2: AI agents will have consciousness. AN3: AI agents will have minds of their own. AN4: AI agents will have personalities. |
Perceived Benefit (PB) | PB1: AI agents will provide convenience for daily life. PB2: AI agents will improve work efficiency and productivity. PB3: AI agents will promote economic development. PB4: AI agents will provide personalized services to better meet individual needs. |
Perceived Knowledge (PK) | PK1: I know AI agents very well. PK2: Compared to most people, I have a better understanding of AI agents. PK3: Among the people I know, I can be regarded as an “expert” in the field of AI agents. |
Cognitive Trust (CT) | CT1: AI agents will be technologically trustworthy. CT2: The overall ability and performance of AI agents will be reliable. CT3: AI agents will have rich and powerful domain knowledge. CT4: AI agents will make accurate and wise decisions. |
Affective Trust (AT) | AT1: AI agents will closely follow and understand my needs in future interactions. AT2: AI agents will provide timely assistance when I need help. AT3: AI agents that help me with tasks will make me feel comfortable. AT4: AI agents will demonstrate understanding and resonance with my emotional state. |
Overall Trust (OT) | OT1: AI agents will be reliable. OT2: AI agents will be trustworthy. OT3: Overall, I can trust AI agents in the future. |
General Acceptance (GA) | GA1: I will use AI agents in the future. GA2: I will pay for AI agents in the future. GA3: I will recommend AI agents to my family and friends. GA4: Please rate your overall attitude toward future AI agents. GA5: Please indicate your acceptability level of future AI agents. |
Ethical Expectation (EE) | EE1: I expect AI agents to provide me with sufficient information to explain their operational principle when performing tasks in the future. EE2: I expect explanations from AI agents to be clear and easy to understand in the future. EE3: I expect AI agents to fully demonstrate their action to me, ensuring transparency in the future. EE4: I expect AI agents to enable me to clearly understand how they make decisions in the future. EE5: I expect AI agents to follow the moral and ethical standards of human society in the future. EE6: I expect AI agents to be fair, such as to ensure that every user receives support in the future. EE7: I expect AI agents to be inclusive, such as avoiding gender discrimination in the future. EE8: I expect AI agents to effectively protect my personal privacy and data security in the future. |
Note: All items were adapted from established studies and refined based on the findings of the pilot test.
References
| Demographic Characteristics | Variables | N | % |
|---|---|---|---|
| Gender | Female | 350 | 55.4 |
| | Male | 282 | 44.6 |
| Age (years) | 18–29 | 350 | 55.4 |
| | 30–39 | 217 | 34.3 |
| | 40–49 | 52 | 8.2 |
| | 50–59 | 12 | 1.9 |
| | ≥60 | 1 | 0.2 |
| Education | Middle school and below | 14 | 2.2 |
| | High school | 48 | 7.6 |
| | Junior college | 154 | 24.4 |
| | Undergraduate | 317 | 50.2 |
| | Graduate and above | 99 | 15.7 |
| Experience in using AI-related applications | Presence | 593 | 93.8 |
| | Absence | 39 | 6.2 |
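These proportions can be cross-checked against the implied sample size: the gender counts sum to 632 respondents, and every percentage in the table is consistent with that base. A minimal sketch of the check (the total N is inferred from the counts, since it is not restated in this excerpt):

```python
# Sanity-check the reported percentages against the implied sample size.
# N = 632 is inferred from the gender counts (350 + 282); it is not stated in this excerpt.
counts = {"Female": 350, "Male": 282, "AI experience: presence": 593, "AI experience: absence": 39}
n = counts["Female"] + counts["Male"]
for label, c in counts.items():
    print(f"{label}: {c}/{n} = {100 * c / n:.1f}%")  # matches the table (55.4, 44.6, 93.8, 6.2)
```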
| Construct | Item | M | SD | FL | α | CR | AVE |
|---|---|---|---|---|---|---|---|
| Perceived Pleasure (PP) | PP1 | 5.54 | 1.312 | 0.853 | 0.816 | 0.891 | 0.731 |
| | PP2 | 5.24 | 1.273 | 0.859 | | | |
| | PP3 | 5.19 | 1.370 | 0.852 | | | |
| Anthropomorphism (AN) | AN1 | 4.92 | 1.529 | 0.840 | 0.880 | 0.918 | 0.736 |
| | AN2 | 4.35 | 1.677 | 0.872 | | | |
| | AN3 | 4.50 | 1.743 | 0.888 | | | |
| | AN4 | 4.84 | 1.575 | 0.829 | | | |
| Perceived Knowledge (PK) | PK1 | 4.95 | 1.484 | 0.892 | 0.872 | 0.921 | 0.795 |
| | PK2 | 4.66 | 1.437 | 0.911 | | | |
| | PK3 | 4.31 | 1.724 | 0.871 | | | |
| Perceived Benefit (PB) | PB1 | 5.82 | 1.288 | 0.861 | 0.831 | 0.888 | 0.665 |
| | PB2 | 5.84 | 1.177 | 0.837 | | | |
| | PB3 | 5.64 | 1.302 | 0.768 | | | |
| | PB4 | 5.57 | 1.216 | 0.793 | | | |
| Affective Trust (AT) | AT1 | 5.18 | 1.339 | 0.823 | 0.797 | 0.868 | 0.622 |
| | AT2 | 5.30 | 1.271 | 0.788 | | | |
| | AT3 | 5.24 | 1.346 | 0.785 | | | |
| | AT4 | 4.77 | 1.560 | 0.757 | | | |
| Cognitive Trust (CT) | CT1 | 5.56 | 1.293 | 0.837 | 0.807 | 0.874 | 0.635 |
| | CT2 | 5.38 | 1.201 | 0.834 | | | |
| | CT3 | 5.79 | 1.250 | 0.757 | | | |
| | CT4 | 5.17 | 1.305 | 0.754 | | | |
| Overall Trust (OT) | OT1 | 5.47 | 1.149 | 0.858 | 0.824 | 0.895 | 0.740 |
| | OT2 | 5.26 | 1.179 | 0.857 | | | |
| | OT3 | 5.36 | 1.264 | 0.865 | | | |
| General Acceptance (GA) | GA1 | 5.70 | 1.233 | 0.801 | 0.848 | 0.892 | 0.623 |
| | GA2 | 5.07 | 1.321 | 0.705 | | | |
| | GA3 | 5.34 | 1.334 | 0.828 | | | |
| | GA4 | 5.69 | 1.097 | 0.809 | | | |
| | GA5 | 5.72 | 1.059 | 0.799 | | | |
| Ethical Expectation (EE) | EE1 | 5.69 | 1.212 | 0.744 | 0.880 | 0.907 | 0.582 |
| | EE2 | 5.66 | 1.166 | 0.757 | | | |
| | EE3 | 5.56 | 1.297 | 0.735 | | | |
| | EE5 | 5.78 | 1.354 | 0.782 | | | |
| | EE6 | 5.59 | 1.249 | 0.777 | | | |
| | EE7 | 5.72 | 1.243 | 0.777 | | | |
| | EE8 | 5.76 | 1.430 | 0.765 | | | |

Note: M = mean; SD = standard deviation; FL = standardized factor loading; α = Cronbach's alpha; CR = composite reliability; AVE = average variance extracted.
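Readers who want to audit the reliability columns can recompute CR and AVE directly from the standardized loadings. Below is a minimal sketch in Python using the Perceived Pleasure (PP) loadings from the table; the formulas assumed are the standard ones (AVE as the mean squared loading, CR as rho_c), and the small discrepancies against the reported 0.731/0.891 come from rounding in the published loadings:

```python
# Recompute AVE and composite reliability (CR) from standardized factor loadings.
# Example: Perceived Pleasure (PP) loadings from the measurement-model table.
loadings = [0.853, 0.859, 0.852]

squared = [l ** 2 for l in loadings]
ave = sum(squared) / len(loadings)                                        # mean squared loading
cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - s for s in squared))  # rho_c

print(f"AVE = {ave:.3f}")  # ~0.730 vs. reported 0.731
print(f"CR  = {cr:.3f}")   # ~0.890 vs. reported 0.891
```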
| Construct | AN | AT | CT | EE | GA | OT | PB | PK | PP |
|---|---|---|---|---|---|---|---|---|---|
| AN | 0.858 | | | | | | | | |
| AT | 0.639 | 0.788 | | | | | | | |
| CT | 0.472 | 0.723 | 0.797 | | | | | | |
| EE | 0.315 | 0.522 | 0.635 | 0.763 | | | | | |
| GA | 0.475 | 0.626 | 0.640 | 0.692 | 0.789 | | | | |
| OT | 0.445 | 0.606 | 0.601 | 0.569 | 0.757 | 0.860 | | | |
| PB | 0.328 | 0.585 | 0.723 | 0.728 | 0.663 | 0.563 | 0.816 | | |
| PK | 0.540 | 0.485 | 0.435 | 0.300 | 0.564 | 0.550 | 0.299 | 0.892 | |
| PP | 0.539 | 0.759 | 0.702 | 0.572 | 0.676 | 0.647 | 0.682 | 0.437 | 0.855 |

Note: Diagonal values are the square roots of each construct's AVE; off-diagonal values are inter-construct correlations (Fornell–Larcker criterion).
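Discriminant validity under the Fornell–Larcker criterion requires each diagonal value (the square root of the construct's AVE) to exceed every correlation in its row and column. A minimal sketch of that check, with values transcribed from the table, confirms that all nine constructs pass:

```python
import numpy as np

constructs = ["AN", "AT", "CT", "EE", "GA", "OT", "PB", "PK", "PP"]
# Lower-triangular values from the discriminant-validity table; the diagonal holds sqrt(AVE).
L = np.array([
    [0.858, 0,     0,     0,     0,     0,     0,     0,     0    ],
    [0.639, 0.788, 0,     0,     0,     0,     0,     0,     0    ],
    [0.472, 0.723, 0.797, 0,     0,     0,     0,     0,     0    ],
    [0.315, 0.522, 0.635, 0.763, 0,     0,     0,     0,     0    ],
    [0.475, 0.626, 0.640, 0.692, 0.789, 0,     0,     0,     0    ],
    [0.445, 0.606, 0.601, 0.569, 0.757, 0.860, 0,     0,     0    ],
    [0.328, 0.585, 0.723, 0.728, 0.663, 0.563, 0.816, 0,     0    ],
    [0.540, 0.485, 0.435, 0.300, 0.564, 0.550, 0.299, 0.892, 0    ],
    [0.539, 0.759, 0.702, 0.572, 0.676, 0.647, 0.682, 0.437, 0.855],
])
R = L + L.T - np.diag(np.diag(L))  # full symmetric correlation matrix

for i, name in enumerate(constructs):
    off_diag = np.delete(R[i], i)  # all correlations involving this construct
    passes = R[i, i] > off_diag.max()
    print(f"{name}: sqrt(AVE)={R[i, i]:.3f}, max r={off_diag.max():.3f}, passes={passes}")
```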
| Dependent Variable | Hypothesis | R² | β | p Value | Hypothesis Supported | Q² |
|---|---|---|---|---|---|---|
| OT | H1: AT → OT | 0.423 | 0.359 *** | 0.000 | Yes | 0.309 |
| | H2: CT → OT | | 0.342 *** | 0.000 | Yes | |
| AT | H3: PP → AT | 0.650 | 0.584 *** | 0.000 | Yes | 0.400 |
| | H4: AN → AT | | 0.325 *** | 0.000 | Yes | |
| CT | H5: PK → CT | 0.575 | 0.240 *** | 0.000 | Yes | 0.360 |
| | H6: PB → CT | | 0.651 *** | 0.000 | Yes | |
| GA | H7: OT → GA | 0.635 | 0.543 *** | 0.000 | Yes | 0.390 |
| | H8: AT → GA | | 0.149 ** | 0.001 | Yes | |
| | H9: CT → GA | | 0.205 *** | 0.000 | Yes | |
| EE | H10: GA → EE | 0.479 | 0.692 *** | 0.000 | Yes | 0.272 |

Note: ** p < 0.01; *** p < 0.001. GoF = 0.613; SRMR = 0.063.
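The GoF value in the note can be reproduced under the commonly used definition for reflective PLS path models, the geometric mean of the average AVE and the average R² of the endogenous constructs; that definition is an assumption here, but it matches the reported 0.613 exactly:

```python
# Reproduce the reported GoF, assuming GoF = sqrt(mean(AVE) * mean(R^2)).
ave = {  # AVE values from the measurement-model table
    "PP": 0.731, "AN": 0.736, "PK": 0.795, "PB": 0.665, "AT": 0.622,
    "CT": 0.635, "OT": 0.740, "GA": 0.623, "EE": 0.582,
}
r2 = {"OT": 0.423, "AT": 0.650, "CT": 0.575, "GA": 0.635, "EE": 0.479}  # endogenous R^2

gof = ((sum(ave.values()) / len(ave)) * (sum(r2.values()) / len(r2))) ** 0.5
print(f"GoF = {gof:.3f}")  # 0.613, matching the reported value
```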
| Mediator | Path | CI | VAF (%) | Mediating Role of Trust |
|---|---|---|---|---|
| AT | PP → GA | [0.1017, 0.2345] | 28.95 | Partial mediation |
| | PP → OT | [0.1185, 0.2621] | 31.11 | Partial mediation |
| | AN → GA | [0.1870, 0.2875] | 73.36 | Partial mediation |
| | AN → OT | [0.2032, 0.3164] | 79.91 | Partial mediation |
| CT | PK → GA | [0.1070, 0.1840] | 37.56 | Partial mediation |
| | PK → OT | [0.1051, 0.1851] | 35.99 | Partial mediation |
| | PB → GA | [0.1522, 0.2951] | 52.03 | Partial mediation |
| | PB → OT | [0.2080, 0.3878] | 36.15 | Partial mediation |
| OT | AT → GA | [0.2512, 0.3770] | 57.18 | Partial mediation |
| | CT → GA | [0.2605, 0.4051] | 54.23 | Partial mediation |

Note: CI = confidence interval of the indirect effect; VAF = variance accounted for (indirect effect divided by total effect).
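VAF expresses the indirect effect as a share of the total effect. By the rule of thumb commonly applied in PLS-SEM (an assumption here, not stated in this excerpt), a VAF below roughly 20% indicates no mediation, 20–80% partial mediation, and above 80% full mediation; every VAF in the table falls in the 20–80% band, consistent with the "partial mediation" labels, with AN → OT (79.91%) sitting close to the upper boundary. A small sketch of that classification:

```python
# Classify mediation type from VAF (variance accounted for), assuming the
# common PLS-SEM rule of thumb: <20% no mediation, 20-80% partial, >80% full.
def classify_mediation(vaf_percent: float) -> str:
    if vaf_percent < 20:
        return "No mediation"
    if vaf_percent <= 80:
        return "Partial mediation"
    return "Full mediation"

# VAF values (%) transcribed from the mediation table above.
vaf_results = {
    "AT: PP -> GA": 28.95, "AT: PP -> OT": 31.11,
    "AT: AN -> GA": 73.36, "AT: AN -> OT": 79.91,
    "CT: PK -> GA": 37.56, "CT: PK -> OT": 35.99,
    "CT: PB -> GA": 52.03, "CT: PB -> OT": 36.15,
    "OT: AT -> GA": 57.18, "OT: CT -> GA": 54.23,
}
for path, vaf in vaf_results.items():
    print(f"{path}: VAF = {vaf:.2f}% -> {classify_mediation(vaf)}")
```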
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Zhao, X.; You, W.; Zheng, Z.; Shi, S.; Lu, Y.; Sun, L. How Do Consumers Trust and Accept AI Agents? An Extended Theoretical Framework and Empirical Evidence. Behavioral Sciences 2025, 15(3), 337. https://doi.org/10.3390/bs15030337