Trusting Humans or Bots? Examining Trust Transfer and Algorithm Aversion in China’s E-Government Services
Abstract
1. Introduction
2. Theoretical Framework and Hypothesis Development
2.1. Government Chatbots (GCs) in Chinese E-Government Services
2.2. The Impact of Different Agent Types
2.2.1. Different Agent Types and Perceived Risk (PR)
2.2.2. Different Agent Types and Trust of E-Government Services (TOE)
2.2.3. Different Agent Types and E-Government Adoption Intention (EGA)
2.3. Trust of Government (TOG) and Perceived Risk (PR)
2.4. Factors Affecting Trust of E-Government Services (TOE)
2.4.1. Perceived Risk Theory
2.4.2. Trust Transfer Theory
2.5. Trust of E-Government Services (TOE) and E-Government Adoption Intention (EGA)
2.6. Modulation Effects of Agent Types
3. Methodology
3.1. Experimental Scenario Design and Measurement Instrument Development
3.2. Data Collection
3.3. Research Ethics Review
4. Data Analysis and Results
4.1. Data Analysis
4.2. Common Method Bias
4.3. Measurement Model
4.4. Measurement Invariance
4.5. Differences Between Different Types of Agents
4.6. Structural Model and Hypothesis Testing
5. Discussion
5.1. The Impact of Agent Type on Citizens’ Perceptions of E-Government Services
5.2. The Moderating Role of Agent Type in Trust Transfer Mechanisms
5.3. Perceived Risk as a Boundary Condition for the Human–GC Trust Gap
6. Conclusions
6.1. Practical Contribution
6.2. Theoretical Contributions
6.3. Research Limitations and Future Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Measurement Items for All Structures
| Construct | Items |
|---|---|
| Trust in e-government service (TOE) (J. V. Chen et al., 2015) | |
| E-government service adoption intention (EGA) (Hung et al., 2006) | |
| Trust in the government (TOG) (Grimmelikhuijsen, 2012) | |
| Perceived risk (PR) (W. Li, 2021) | |
Appendix B. Scenario Description
Scenario Description: 2 × 2 Between-Subjects Design (Agent Type × Government Performance News)

| Condition | Scenario |
|---|---|
| Positive × Human | You recently read a report about the positive results achieved by the government in improving the quality of public services. For example, government officials have demonstrated a high level of integrity and professionalism in handling public affairs and actively responded to the needs of citizens. In addition, the government has strengthened training for civil servants to ensure that they provide efficient services and adhere to ethical standards. In this context, you wish to use the government’s online platform to inquire about the office hours of a certain department. The system has automatically assigned a government staff member to assist you. |
| Positive × GCs | You recently read a report about the positive results achieved by the government in improving the quality of public services. For example, government officials have demonstrated a high level of integrity and professionalism in handling public affairs and actively responded to the needs of citizens. In addition, the government has strengthened training for civil servants to ensure that they provide efficient services and adhere to ethical standards. In this context, you wish to use the government’s online platform to inquire about the office hours of a certain department. The system has automatically assigned a government chatbot to assist you. |
| Negative × Human | You have recently come across a report stating that certain government departments have been criticized for inefficiency and a lack of integrity in public service delivery. Some citizens have reported experiences of bureaucratic shirking and inaccurate information, resulting in delays in service processing. In this context, you wish to use the government’s online platform to inquire about the office hours of a certain department. The system has automatically assigned a government staff member to assist you. |
| Negative × GCs | You have recently come across a report stating that certain government departments have been criticized for inefficiency and a lack of integrity in public service delivery. Some citizens have reported experiences of bureaucratic shirking and inaccurate information, resulting in delays in service processing. In this context, you wish to use the government’s online platform to inquire about the office hours of a certain department. The system has automatically assigned a government chatbot to assist you. |
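To make the factorial structure of the scenarios concrete, here is a minimal illustrative sketch (not code from the study) of randomly assigning participants to the four cells of the 2 × 2 between-subjects design; the names `AGENT_TYPES`, `NEWS_VALENCE`, and `assign_conditions` are hypothetical.

```python
import random

# Hypothetical sketch of the 2 x 2 between-subjects design:
# agent type (human vs. government chatbot) x news valence (positive vs. negative).
AGENT_TYPES = ["human", "chatbot"]
NEWS_VALENCE = ["positive", "negative"]


def assign_conditions(n_participants, seed=42):
    """Randomly assign each participant to one (agent, valence) cell."""
    rng = random.Random(seed)
    cells = [(a, v) for a in AGENT_TYPES for v in NEWS_VALENCE]
    return [rng.choice(cells) for _ in range(n_participants)]


# With 318 participants (the sample size reported in the demographics table),
# each of the four cells receives roughly a quarter of the sample.
assignments = assign_conditions(318)
```

Each participant sees exactly one scenario text, so between-cell differences in TOE, EGA, TOG, and PR can be attributed to the manipulated factors.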
References
- Abbas, N., Følstad, A., & Bjørkli, C. A. (2023). Chatbots as part of digital government service provision—A user perspective. In A. Følstad, T. Araujo, S. Papadopoulos, E. L.-C. Law, E. Luger, M. Goodwin, & P. B. Brandtzaeg (Eds.), Chatbot research and design (Vol. 13815, pp. 66–82). Springer International Publishing.
- Abdelhalim, E., Anazodo, K. S., Gali, N., & Robson, K. (2024). A framework of diversity, equity, and inclusion safeguards for chatbots. Business Horizons, 67(5), 487–498.
- Abu-Shanab, E. (2014). Antecedents of trust in e-government services: An empirical test in Jordan. Transforming Government: People, Process and Policy, 8(4), 480–499.
- Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2, 100006.
- Alagarsamy, S., & Mehrolia, S. (2023). Exploring chatbot trust: Antecedents and behavioural outcomes. Heliyon, 9(5), e16074.
- Alghamdi, S. Y., Kaur, S., Qureshi, K. M., Almuflih, A. S., Almakayeel, N., Alsulamy, S., Qureshi, M. R. N., & Nevárez-Moorillón, G. V. (2023). Antecedents for online food delivery platform leading to continuance usage intention via e-word-of-mouth review adoption. PLoS ONE, 18(8), e0290247.
- Al Shamsi, J. H., Al-Emran, M., & Shaalan, K. (2022). Understanding key drivers affecting students’ use of artificial intelligence-based voice assistants. Education and Information Technologies, 27(6), 8071–8091.
- Alzahrani, L., Al-Karaghouli, W., & Weerakkody, V. (2017). Analysing the critical factors influencing trust in e-government adoption from citizens’ perspective: A systematic review and a conceptual framework. International Business Review, 26(1), 164–175.
- Alzahrani, L., Al-Karaghouli, W., & Weerakkody, V. (2018). Investigating the impact of citizens’ trust toward the successful adoption of e-government: A multigroup analysis of gender, age, and internet experience. Information Systems Management, 35(2), 124–146.
- Androutsopoulou, A., Karacapilidis, N., Loukis, E., & Charalabidis, Y. (2019). Transforming the communication between citizens and government through AI-guided chatbots. Government Information Quarterly, 36(2), 358–367.
- Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37(4), 101490.
- Bayram, A. B., & Shields, T. (2021). Who trusts the WHO? Heuristics and Americans’ trust in the World Health Organization during the COVID-19 pandemic. Social Science Quarterly, 102(5), 2312–2330.
- Belanche, D., Casaló, L. V., Flavián, C., & Schepers, J. (2014). Trust transfer in the continued usage of public e-services. Information & Management, 51(6), 627–640.
- Benitez, J., Henseler, J., Castillo, A., & Schuberth, F. (2020). How to perform and report an impactful analysis using partial least squares: Guidelines for confirmatory and explanatory IS research. Information & Management, 57(2), 103168.
- Beyari, H., & Hashem, T. (2025). The role of artificial intelligence in personalizing social media marketing strategies for enhanced customer experience. Behavioral Sciences, 15(5), 700.
- Bélanger, F., & Carter, L. (2008). Trust and risk in e-government adoption. The Journal of Strategic Information Systems, 17(2), 165–176.
- Bonezzi, A., Ostinelli, M., & Melzner, J. (2022). The human black-box: The illusion of understanding human better than algorithmic decision-making. Journal of Experimental Psychology: General, 151(9), 2250–2258.
- Castelo, N. (2024). Perceived corruption reduces algorithm aversion. Journal of Consumer Psychology, 34(2), 326–333.
- Chen, C. (2024). How consumers respond to service failures caused by algorithmic mistakes: The role of algorithmic interpretability. Journal of Business Research, 176, 114610.
- Chen, J. V., Jubilado, R. J. M., Capistrano, E. P. S., & Yen, D. C. (2015). Factors affecting online tax filing—An application of the IS success model and trust theory. Computers in Human Behavior, 43, 251–262.
- Chen, T., & Gasco-Hernandez, M. (2024). Uncovering the results of AI chatbot use in the public sector: Evidence from US state governments. Public Performance & Management Review, 1–26.
- Chen, T., Li, S., Zeng, Z., Liang, Z., Chen, Y., & Guo, W. (2024). An empirical investigation of users’ switching intention to public service robots: From the perspective of PPM framework. Government Information Quarterly, 41(2), 101933.
- Chen, Y.-S., & Chang, C.-H. (2012). Enhance green purchase intentions. Management Decision, 50(3), 502–520.
- Choi, J.-C., & Song, C. (2020). Factors explaining why some citizens engage in e-participation, while others do not. Government Information Quarterly, 37(4), 101524.
- Cortés-Cediel, M. E., Segura-Tinoco, A., Cantador, I., & Bolívar, M. P. R. (2023). Trends and challenges of e-government chatbots: Advances in exploring open government data and citizen participation content. Government Information Quarterly, 40(4), 101877.
- Creemers, R. (2017). Cyber China: Upgrading propaganda, public opinion work and social management for the twenty-first century. Journal of Contemporary China, 26(103), 85–100.
- Curiello, S., Iannuzzi, E., Meissner, D., & Nigro, C. (2025). Mind the gap: Unveiling the advantages and challenges of artificial intelligence in the healthcare ecosystem. European Journal of Innovation Management. Ahead-of-print.
- Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31(3), 376–389.
- Dang, J., & Liu, L. (2024). Extended artificial intelligence aversion: People deny humanness to artificial intelligence users. Journal of Personality and Social Psychology.
- Dang, Q., & Li, G. (2025). Unveiling trust in AI: The interplay of antecedents, consequences, and cultural dynamics. AI and Society.
- Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302–1314.
- Fahnenstich, H., Rieger, T., & Roesler, E. (2024). Trusting under risk—Comparing human to AI decision support agents. Computers in Human Behavior, 153, 108107.
- Fiske, S. T., Cuddy, A. J., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11, 77–83.
- Frank, D.-A., Chrysochou, P., Mitkidis, P., Otterbring, T., & Ariely, D. (2024). Navigating uncertainty: Exploring consumer acceptance of artificial intelligence under self-threats and high-stakes decisions. Technology in Society, 79, 102732.
- Frank, D.-A., Jacobsen, L. F., Søndergaard, H. A., & Otterbring, T. (2023). In companies we trust: Consumer adoption of artificial intelligence services and the role of trust in companies and AI autonomy. Information Technology & People, 36(8), 155–173.
- Fuentes, R. (2011). Efficiency of travel agencies: A case study of Alicante, Spain. Tourism Management, 32(1), 75–87.
- Gesk, T. S., & Leyer, M. (2022). Artificial intelligence in public services: When and why citizens accept its usage. Government Information Quarterly, 39(3), 101704.
- Grimmelikhuijsen, S. (2012). Linking transparency, knowledge and citizen trust in government: An experiment. International Review of Administrative Sciences, 78(1), 50–73.
- Grimmelikhuijsen, S., & Knies, E. (2017). Validating a scale for citizen trust in government organizations. International Review of Administrative Sciences, 83(3), 583–601.
- Grzymek, V., & Puntschuh, M. (2019). What Europe knows and thinks about algorithms: Results of a representative survey. Bertelsmann Stiftung, Eupinions, February 2019. Available online: https://aei.pitt.edu/102582/ (accessed on 26 March 2025).
- Guo, Y., & Dong, P. (2024). Factors influencing user favorability of government chatbots on digital government interaction platforms across different scenarios. Journal of Theoretical and Applied Electronic Commerce Research, 19(2), 818–845.
- Gupta, P., Hooda, A., Jeyaraj, A., Seddon, J. J., & Dwivedi, Y. K. (2024). Trust, risk, privacy and security in e-government use: Insights from a MASEM analysis. Information Systems Frontiers, 27, 1089–1105.
- Gupta, S., Kamboj, S., & Bag, S. (2023). Role of risks in the development of responsible artificial intelligence in the digital healthcare domain. Information Systems Frontiers, 25(6), 2257–2274.
- Ha, T., Kim, S., Seo, D., & Lee, S. (2020). Effects of explanation types and perceived risk on trust in autonomous vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 73, 271–280.
- Haesevoets, T., Verschuere, B., & Roets, A. (2025). AI adoption in public administration: Perspectives of public sector managers and public sector non-managerial employees. Government Information Quarterly, 42(2), 102029.
- Hair, J., & Alamer, A. (2022). Partial least squares structural equation modeling (PLS-SEM) in second language and education research: Guidelines using an applied example. Research Methods in Applied Linguistics, 1(3), 100027.
- Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2), 139–152.
- Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24.
- Han, G., & Yan, S. (2019). Does food safety risk perception affect the public’s trust in their government? An empirical study on a national survey in China. International Journal of Environmental Research and Public Health, 16(11), 1874.
- Henseler, J., Ringle, C. M., & Sarstedt, M. (2016). Testing measurement invariance of composites using partial least squares. International Marketing Review, 33(3), 405–431.
- Henseler, J., & Sarstedt, M. (2013). Goodness-of-fit indices for partial least squares path modeling. Computational Statistics, 28(2), 565–580.
- Hung, S.-Y., Chang, C.-M., & Yu, T.-J. (2006). Determinants of user acceptance of the e-government services: The case of online tax filing and payment system. Government Information Quarterly, 23(1), 97–122.
- Imran, M., Li, J., Bano, S., & Rashid, W. (2025). Impact of democratic leadership on employee innovative behavior with mediating role of psychological safety and creative potential. Sustainability, 17(5), 1879.
- Ingrams, A., Kaufmann, W., & Jacobs, D. (2022). In AI we trust? Citizen perceptions of AI in government decision making. Policy & Internet, 14(2), 390–409.
- Ireland, L. (2020). Who errs? Algorithm aversion, the source of judicial error, and public support for self-help behaviors. Journal of Crime and Justice, 43(2), 174–192.
- Ivanović, D., Radenković, S. D., Hanić, H., Simović, V., & Kojić, M. (2025). A novel approach to measuring digital entrepreneurial competencies among university students. The International Journal of Management Education, 23(2), 101180.
- Jang, J., & Kim, B. (2022). The impact of potential risks on the use of exploitable online communities: The case of South Korean cyber-security communities. Sustainability, 14(8), 4828.
- Jansen, A., & Ølnes, S. (2016). The nature of public e-services and their quality dimensions. Government Information Quarterly, 33(4), 647–657.
- Ju, J., Meng, Q., Sun, F., Liu, L., & Singh, S. (2023). Citizen preferences and government chatbot social characteristics: Evidence from a discrete choice experiment. Government Information Quarterly, 40(3), 101785.
- Ju, J., & Wang, L. (2025). The roles of trust and privacy calculus in citizen-centric services usage: Evidence from the close contact query platform in China. Behaviour & Information Technology, 44(3), 574–595.
- Jussupow, E., Benbasat, I., & Heinzl, A. (2024). An integrative perspective on algorithm aversion and appreciation in decision-making. MIS Quarterly, 48, 1575–1590.
- Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5(1), 193–206.
- Kahneman, D., & Tversky, A. (1979). On the interpretation of intuitive probability: A reply to Jonathan Cohen. Cognition, 7(4), 409–411.
- Khalilzadeh, J., Ozturk, A. B., & Bilgihan, A. (2017). Security-related factors in extended UTAUT model for NFC based mobile payment in the restaurant industry. Computers in Human Behavior, 70, 460–474.
- Khamitov, M., Rajavi, K., Huang, D.-W., & Hong, Y. (2024). Consumer trust: Meta-analysis of 50 years of empirical research. Journal of Consumer Research, 51(1), 7–18.
- Kirshner, S. N. (2024). Psychological distance and algorithm aversion: Congruency and advisor confidence. Service Science.
- Kleizen, B., Van Dooren, W., Verhoest, K., & Tan, E. (2023). Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Government Information Quarterly, 40(4), 101834.
- Konuk, F. A. (2021). Trust transfer, price fairness and brand loyalty: The moderating influence of private label product type. International Journal of Retail & Distribution Management, 50(5), 658–674.
- Kurfalı, M., Arifoğlu, A., Tokdemir, G., & Paçin, Y. (2017). Adoption of e-government services in Turkey. Computers in Human Behavior, 66, 168–178.
- Larsen, A. G., & Følstad, A. (2024). The impact of chatbots on public service provision: A qualitative interview study with citizens and public service providers. Government Information Quarterly, 41(2), 101927.
- Lee, J., Kim, H. J., & Ahn, M. J. (2011). The willingness of e-government service adoption by business users: The role of offline service quality and trust in technology. Government Information Quarterly, 28(2), 222–230.
- Lee, K. C., Chung, N., & Lee, S. (2011). Exploring the influence of personal schema on trust transfer and switching costs in brick-and-click bookstores. Information & Management, 48(8), 364–370.
- Li, O., Shi, Y., & Li, K. (2025). Red, rather than blue can promote fairness in decision-making. Humanities and Social Sciences Communications, 12(1), 1–10.
- Li, W. (2021). The role of trust and risk in citizens’ e-government services adoption: A perspective of the extended UTAUT model. Sustainability, 13(14), 7671.
- Li, X., & Wang, J. (2024). Should government chatbots behave like civil servants? The effect of chatbot identity characteristics on citizen experience. Government Information Quarterly, 41(3), 101957.
- Li, X., Yang, P., Jiang, Y., & Gao, D. (2023). Influence of fear of COVID-19 on depression: The mediating effects of anxiety and the moderating effects of perceived social support and stress perception. Frontiers in Psychology, 13, 1005909.
- Lian, J.-W. (2015). Critical factors for cloud based e-invoice service adoption in Taiwan: An empirical study. International Journal of Information Management, 35(1), 98–109.
- Lian, Z., & Wang, F. (2024). Government chatbot: Empowering smart conversations with enhanced contextual understanding and reasoning. Journal of Information Science.
- Liang, Z., Li, Y., & Chen, T. (2025). When AI fails, who gets the blame? Citizens’ attribution patterns in AI-induced public service failures. Journal of Chinese Governance, 1–30.
- Liu, F., Yang, G., & Singhdong, P. (2024). A moderated mediation model of entrepreneurship education, competence, and environmental dynamics on entrepreneurial performance. Sustainability, 16(19), 8502.
- Liu, K., & Tao, D. (2022). The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Computers in Human Behavior, 127, 107026.
- Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
- Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
- Longoni, C., Cian, L., & Kyung, E. J. (2023). Algorithmic transference: People overgeneralize failures of AI in the government. Journal of Marketing Research, 60(1), 170–188.
- Luo, X., Qin, M. S., Fang, Z., & Qu, Z. (2021). Artificial intelligence coaches for sales agents: Caveats and solutions. Journal of Marketing, 85(2), 14–32.
- Lv, X., Yang, Y., Qin, D., & Liu, X. (2025). AI service may backfire: Reduced service warmth due to service provider transformation. Journal of Retailing and Consumer Services, 85, 104282.
- Ma, D., Dong, J., & Lee, C.-C. (2025). Influence of perceived risk on consumers’ intention and behavior in cross-border e-commerce transactions: A case study of the Tmall Global platform. International Journal of Information Management, 81, 102854.
- Mahat-Shamir, M., Zychlinski, E., & Kagan, M. (2023). Psychological distress during the COVID-19 pandemic: An integrative perspective. PLoS ONE, 18(10), e0293189.
- Mahmud, H., Najmul Islam, A. K. M., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390.
- Makasi, T., Nili, A., Desouza, K. C., & Tate, M. (2022). A typology of chatbots in public service delivery. IEEE Software, 39(3), 58–66.
- Mirkovski, K., Rouibah, K., Lowry, P., Paliszkiewicz, J., & Ganc, M. (2023). Cross-country determinants of citizens’ e-government reuse intention: Empirical evidence from Kuwait and Poland. Information Technology & People, 37(4), 1864–1896.
- Misra, S. K., Sharma, S. K., Gupta, S., & Das, S. (2023). A framework to overcome challenges to the adoption of artificial intelligence in Indian government organizations. Technological Forecasting and Social Change, 194, 122721.
- Noh, H.-H., Rim, H. B., & Lee, B.-K. (2025). Exploring user attitudes and trust toward ChatGPT as a social actor: A UTAUT-based analysis. SAGE Open, 15(2), 21582440251345896.
- Panigrahi, R. R., Shrivastava, A. K., Qureshi, K. M., Mewada, B. G., Alghamdi, S. Y., Almakayeel, N., Almuflih, A. S., & Qureshi, M. R. N. (2023). AI chatbot adoption in SMEs for sustainable manufacturing supply chain performance: A mediational research in an emerging country. Sustainability, 15(18), 13743.
- Peng, C., van Doorn, J., Eggers, F., & Wieringa, J. E. (2022). The effect of required warmth on consumer acceptance of artificial intelligence in service: The moderating role of AI-human collaboration. International Journal of Information Management, 66, 102533.
- Peyton, K. (2020). Does trust in government increase support for redistribution? Evidence from randomized survey experiments. American Political Science Review, 114(2), 596–602.
- Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International Journal of Contemporary Hospitality Management, 32(10), 3199–3226.
- Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903.
- Qi, T., Liu, H., & Huang, Z. (2025). An assistant or a friend? The role of parasocial relationship of human-computer interaction. Computers in Human Behavior, 167, 108625.
- Sahu, G. P., & Gupta, M. P. (2007). Users’ acceptance of e-government: A study of Indian central excise. International Journal of Electronic Government Research (IJEGR), 3(3), 1–21.
- Sarstedt, M., Hair, J. F., Nitzl, C., Ringle, C. M., & Howard, M. C. (2020). Beyond a tandem analysis of SEM and PROCESS: Use of PLS-SEM for mediation analyses! International Journal of Market Research, 62(3), 288–299.
- Schaupp, L. C., Carter, L., & McBride, M. E. (2010). E-file adoption: A study of U.S. taxpayers’ intentions. Computers in Human Behavior, 26(4), 636–644.
- Schlägel, C., & Sarstedt, M. (2016). Assessing the measurement invariance of the four-dimensional cultural intelligence scale across countries: A composite model approach. European Management Journal, 34(6), 633–649.
- Silalahi, A. D. K. (2025). Can generative artificial intelligence drive sustainable behavior? A consumer-adoption model for AI-driven sustainability recommendations. Technology in Society, 83, 102995.
- Silva, F. A., Shojaei, A. S., & Barbosa, B. (2023). Chatbot-based services: A study on customers’ reuse intention. Journal of Theoretical and Applied Electronic Commerce Research, 18(1), 457–474.
- Song, Y., Natori, T., & Yu, X. (2024). Tracing the evolution of e-government: A visual bibliometric analysis from 2000 to 2023. Administrative Sciences, 14(7), 133.
- Starke, C., & Lünich, M. (2020). Artificial intelligence for political decision-making in the European Union: Effects on citizens’ perceptions of input, throughput, and output legitimacy. Data & Policy, 2, e16.
- Stewart, K. J. (2003). Trust transfer on the world wide web. Organization Science, 14(1), 5–17.
- Sun, X., Pelet, J.-É., Dai, S., & Ma, Y. (2023). The effects of trust, perceived risk, innovativeness, and deal proneness on consumers’ purchasing behavior in the livestreaming social commerce context. Sustainability, 15(23), 16320.
- Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157.
- Toyoda, R., Abegão, F. R., Gill, S., & Glassey, J. (2023). Drivers of immersive virtual reality adoption intention: A multi-group analysis in chemical industry settings. Virtual Reality, 27(4), 3273–3284.
- Turel, O., Yuan, Y., & Connelly, C. E. (2008). In justice we trust: Predicting user acceptance of e-customer services. Journal of Management Information Systems, 24(4), 123–151.
- Venkatesh, V., Thong, J. Y. L., Chan, F. K. Y., & Hu, P. J. H. (2016). Managing citizens’ uncertainty in e-government services: The mediating and moderating roles of transparency and trust. Information Systems Research, 27(1), 87–111.
- Verkijika, S. F., & De Wet, L. (2018). E-government adoption in Sub-Saharan Africa. Electronic Commerce Research and Applications, 30, 83–93.
- Wang, C., Ni, L., Yuan, B., & Tang, M. (2025). Exploring the effect of AI warm response on consumer reuse intention in service failure. Computers in Human Behavior, 166, 108606.
- Wang, D., Richards, D., Bilgin, A. A., & Chen, C. (2023). Implementation of a conversational virtual assistant for open government data portal: Effects on citizens. Journal of Information Science, 1655515221151140.
- Wang, S., Min, C., Liang, Z., Zhang, Y., & Gao, Q. (2024). The decision-making by citizens: Evaluating the effects of rule-driven and learning-driven automated responders on citizen-initiated contact. Computers in Human Behavior, 161, 108413.
- Wang, Y., Zhang, N., & Zhao, X. (2022). Understanding the determinants in the different government AI adoption stages: Evidence of local government chatbots in China. Social Science Computer Review, 40(2), 534–554.
- Wang, Y.-F., Chen, Y.-C., & Chien, S.-Y. (2025). Citizens’ intention to follow recommendations from a government-supported AI-enabled system. Public Policy and Administration, 40(2), 372–400.
- Warkentin, M., Gefen, D., Pavlou, P. A., & Rose, G. M. (2002). Encouraging citizen adoption of e-government by building trust. Electronic Markets, 12(3), 157–162.
- Wei, M., Zhou, K. Z., Chen, D., Sanfilippo, M. R., Zhang, P., Chen, C., Feng, Y., & Meng, L. (2025). Understanding risk preference and risk perception when adopting high-risk and low-risk AI technologies. International Journal of Human–Computer Interaction, 1–16.
- Weiner, B. (1979). A theory of motivation for some classroom experiences. Journal of Educational Psychology, 71(1), 3–25.
- Xia, H., Chen, J., Qiu, Y., Liu, P., & Liu, Z. (2025). The impact of human–chatbot interaction on human–human interaction: A substitution or complementary effect. International Journal of Human–Computer Interaction, 41(2), 848–860.
- Xie, Y., Zhou, P., Liang, C., Zhao, S., & Lu, W. (2025). Affiliative or self-defeating? Exploring the effect of humor types on customer forgiveness in the context of AI agents’ service failure. Journal of Business Research, 194, 115381.
- Xu, W., Jung, H., & Han, J. (2022). The influences of experiential marketing factors on brand trust, brand attachment, and behavioral intention: Focused on integrated resort tourists. Sustainability, 14(20), 13000.
- Yang, S., Xie, W., Chen, Y., Li, Y., Jiang, H., & Zhou, W. (2024). Warmth or competence? Understanding voice shopping intentions from human-AI interaction perspective. Electronic Commerce Research.
- Yuan, M., & Yu, R. (2024). Exploring the influential factors of initial trust in autonomous cars. Ergonomics, 1–19.
- Zeng, J. (2020). Artificial intelligence and China’s authoritarian governance. International Affairs, 96(6), 1441–1459.
- Zhang, L., Pentina, I., & Fan, Y. (2021). Who do you choose? Comparing perceptions of human vs. robo-advisor in the context of financial services. Journal of Services Marketing, 35(5), 634–646.
- Zhao, X., You, W., Zheng, Z., Shi, S., Lu, Y., & Sun, L. (2025). How do consumers trust and accept AI agents? An extended theoretical framework and empirical evidence. Behavioral Sciences, 15(3), 337.
- Zhao, Y., & Pan, Y.-H. (2023). A study of the impact of cultural characteristics on consumers’ behavioral intention for mobile payments: A comparison between China and Korea. Sustainability, 15(8), 6956.
- Zhou, M., Liu, L., & Feng, Y. (2025). Building citizen trust to enhance satisfaction in digital public services: The role of empathetic chatbot communication. Behaviour & Information Technology, 1–20.
Variable | Category | N (%)
---|---|---
Gender | Female | 187 (58.8)
 | Male | 131 (41.2)
Age (years) | 20 and under | 40 (12.6)
 | 21–30 | 75 (23.6)
 | 31–40 | 147 (54.7)
 | 41–50 | 24 (7.5)
 | 51 and above | 5 (1.6)
Highest Educational Qualification | High school diploma or below | 7 (2.2)
 | Bachelor’s degree | 288 (90.2)
 | Master’s degree or higher | 23 (7.2)
Frequency of using AI tools (1 = Not at all; 5 = Almost every day) | 1 (Not at all) | 0 (0.0)
 | 2 (Not used much) | 6 (1.9)
 | 3 (Occasional use) | 78 (24.5)
 | 4 (Frequently use) | 198 (62.3)
 | 5 (Almost every day) | 36 (11.3)
Variable | Factor Loadings | Cronbach’s Alpha | CR | AVE |
---|---|---|---|---|
PR | 0.837–0.929 | 0.729 | 0.877 | 0.781 |
TOG | 0.786–0.877 | 0.789 | 0.877 | 0.705 |
TOE | 0.783–0.855 | 0.762 | 0.863 | 0.678 |
EGA | 0.826–0.840 | 0.764 | 0.864 | 0.679 |
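The CR and AVE columns above follow the standard formulas from standardized loadings: AVE is the mean squared loading, and composite reliability is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). As a minimal sketch, treating the two endpoints of the reported PR loading range (0.837 and 0.929) as if they were the construct's only two item loadings (an assumption; the table reports only the range), these formulas reproduce the tabled PR values to rounding:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (rho_c) computed from standardized loadings."""
    s = sum(loadings)                      # sum of loadings
    e = sum(1 - l * l for l in loadings)   # sum of error variances
    return s * s / (s * s + e)

# Assumed PR item loadings: the endpoints of the reported range
pr_loadings = [0.837, 0.929]
```

With these inputs, `ave(pr_loadings)` ≈ 0.782 and `composite_reliability(pr_loadings)` ≈ 0.877, matching the PR row (0.781 and 0.877) within rounding.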
Variable | PR | TOG | TOE | EGA
---|---|---|---|---
PR | | | |
TOG | 0.597 | | |
TOE | 0.687 | 0.813 | |
EGA | 0.744 | 0.534 | 0.847 |
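A quick check of the HTMT matrix above against the conventional cut-offs (0.85 conservative, 0.90 liberal; applying these particular thresholds here is my assumption) can be sketched as follows. All six reported values, including the largest (TOE–EGA at 0.847), fall below 0.85:

```python
# HTMT values transcribed from the discriminant-validity table (lower triangle)
htmt = {
    ("TOG", "PR"): 0.597, ("TOE", "PR"): 0.687, ("TOE", "TOG"): 0.813,
    ("EGA", "PR"): 0.744, ("EGA", "TOG"): 0.534, ("EGA", "TOE"): 0.847,
}

def flag_discriminant_issues(values, conservative=0.85, liberal=0.90):
    """Split construct pairs into borderline (between cut-offs) and failing (> liberal)."""
    borderline = [pair for pair, v in values.items() if conservative < v <= liberal]
    failures = [pair for pair, v in values.items() if v > liberal]
    return borderline, failures
```

Running `flag_discriminant_issues(htmt)` returns two empty lists, i.e., every pair clears even the conservative 0.85 criterion.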
Variable | Original Correlation | 5% Quantile Correlations | Permutation p-Value | Compositional Invariance |
---|---|---|---|---|
PR | 1.000 | 0.994 | 0.816 | Yes |
TOG | 1.000 | 0.997 | 0.619 | Yes |
TOE | 1.000 | 0.998 | 0.493 | Yes |
EGA | 0.998 | 0.996 | 0.208 | Yes |
Variable | Mean—Original Difference | 95% Confidence Interval | Equal Mean Values | Variance—Original Difference | 95% Confidence Interval | Equal Variances |
---|---|---|---|---|---|---|
PR | 0.145 | (−0.272, 0.271) | Yes | 0.103 | (−0.360, 0.352) | Yes |
TOG | 0.162 | (−0.274, 0.266) | Yes | −0.353 | (−0.645, 0.625) | Yes |
TOE | 0.136 | (−0.265, 0.278) | Yes | −0.266 | (−0.589, 0.556) | Yes |
EGA | 0.211 | (−0.263, 0.284) | Yes | −0.471 | (−0.661, 0.640) | Yes |
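The two tables above correspond to MICOM steps 2 and 3: compositional invariance holds when each construct's original correlation is at least the 5% quantile of its permutation distribution, and full invariance additionally requires the original mean and variance differences to lie inside their 95% confidence intervals. A minimal sketch of that decision rule, applied to the reported values:

```python
# MICOM step 2: construct -> (original correlation, 5% quantile of permuted correlations)
step2 = {
    "PR": (1.000, 0.994), "TOG": (1.000, 0.997),
    "TOE": (1.000, 0.998), "EGA": (0.998, 0.996),
}
# MICOM step 3: construct -> (mean diff, mean CI, variance diff, variance CI)
step3 = {
    "PR":  (0.145, (-0.272, 0.271), 0.103, (-0.360, 0.352)),
    "TOG": (0.162, (-0.274, 0.266), -0.353, (-0.645, 0.625)),
    "TOE": (0.136, (-0.265, 0.278), -0.266, (-0.589, 0.556)),
    "EGA": (0.211, (-0.263, 0.284), -0.471, (-0.661, 0.640)),
}

def full_measurement_invariance(step2, step3):
    """True only if every construct passes both MICOM steps."""
    for corr, quantile5 in step2.values():
        if corr < quantile5:  # compositional invariance fails
            return False
    for mean_d, mean_ci, var_d, var_ci in step3.values():
        if not (mean_ci[0] <= mean_d <= mean_ci[1]):
            return False
        if not (var_ci[0] <= var_d <= var_ci[1]):
            return False
    return True
```

For the reported values the function returns `True`, consistent with the "Yes" verdicts in both tables, which licenses the pooled/multigroup comparisons that follow.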
Variable | Agent Type: GCs | Agent Type: Human | Z-Value | p-Value |
---|---|---|---|---|
PR | 4.05 (1.67; 5.00) | 4.13 (1.33; 5.00) | −2.30 | <0.05 |
TOE | 2.35 (1.00; 4.50) | 2.14 (1.00; 4.50) | −1.09 | >0.05 |
EGA | 3.99 (1.33; 5.00) | 4.18 (1.67; 5.00) | −2.64 | <0.05 |
Variable | R-Square | R-Square Adjusted | Q2 |
---|---|---|---|
PR | 0.240 | 0.233 | 0.211 |
TOE | 0.513 | 0.502 | 0.342 |
EGA | 0.430 | 0.427 | 0.161 |
Hypothesis | Path | Path Coefficient (β) | S.D. | p-Value | Results |
---|---|---|---|---|---|
H1 | Agent type → PR | −0.202 * | 0.097 | <0.05 | Supported |
H2 | Agent type → TOE | −0.148 | 0.091 | NS | Rejected |
H3 | Agent type → EGA | 0.212 * | 0.085 | <0.05 | Supported |
H4 | TOG → PR | −0.359 *** | 0.079 | <0.001 | Supported |
H5 | PR → TOE | −0.344 *** | 0.077 | <0.001 | Supported |
H6 | TOG → TOE | 0.362 *** | 0.082 | <0.001 | Supported |
H7 | TOE → EGA | 0.641 *** | 0.045 | <0.001 | Supported |
Moderating Effects | |||||
H8 | Agent type × TOG → PR | −0.234 * | 0.119 | <0.05 | Supported |
H9 | Agent type × TOG → TOE | 0.286 * | 0.129 | <0.05 | Supported |
H10 | Agent type × TOG × PR → TOE | −0.268 * | 0.130 | <0.05 | Supported |
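In regression terms, the moderation hypotheses H8–H10 amount to product terms between the agent-type dummy and the (centered) TOG and PR scores. The study itself uses PLS-SEM, so the OLS sketch below is only an illustration of how the two- and three-way interaction columns are constructed; the sample size echoes the demographics table, but the 0/1 coding, variable names, and planted coefficients are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 318                                        # sample size from the demographics table
agent = rng.integers(0, 2, n).astype(float)    # agent-type dummy (coding assumed)
tog = rng.normal(size=n)                       # synthetic trust-in-government scores
pr = rng.normal(size=n)                        # synthetic perceived-risk scores

# Mean-center continuous predictors before forming product terms
tog_c = tog - tog.mean()
pr_c = pr - pr.mean()

# Design matrix: intercept, main effects, two-way terms, three-way term
X = np.column_stack([
    np.ones(n), agent, tog_c, pr_c,
    agent * tog_c,          # H8/H9-style two-way interaction
    tog_c * pr_c,
    agent * tog_c * pr_c,   # H10-style three-way interaction
])

# Noiseless synthetic outcome with planted coefficients (illustration only)
beta_true = np.array([0.0, -0.15, 0.36, -0.34, 0.29, 0.05, -0.27])
y = X @ beta_true

# OLS recovers the planted coefficients exactly in the noiseless case
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The point of the sketch is only the construction of the interaction columns: centering before multiplying keeps the lower-order coefficients interpretable as conditional effects at the mean of the moderator.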
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Song, Y.; Natori, T.; Yu, X. Trusting Humans or Bots? Examining Trust Transfer and Algorithm Aversion in China’s E-Government Services. Adm. Sci. 2025, 15, 308. https://doi.org/10.3390/admsci15080308