Chatbot Adoption: A Systematic Literature Review
Abstract
1. Introduction
2. Conceptual Background
2.1. Definition and Evolution of Chatbots
2.2. Categories of Chatbots
2.3. Theoretical Background of Chatbot Adoption
3. Methodology
3.1. Source Types and Source Quality
3.2. Period, Search Mechanism, and Keywords
3.3. Broad First-Pass Recall and Staged Screening Strategy
3.4. Purification
3.5. Coding
4. Findings
4.1. Publication Trends, Journals, and Citations
4.1.1. Publication Trends
4.1.2. Academic Journals
4.1.3. Top-Cited Articles
4.2. Methodologies
4.2.1. Methods
4.2.2. Study Sectors
4.2.3. Study Samples
4.2.4. Geographic Locations
4.2.5. Methods of Data Analysis
4.3. Theoretical Considerations Explaining Chatbot Adoption
5. Factors Impacting Consumer Adoption of Chatbots
5.1. Drivers to Consumer Adoption of Chatbots
5.1.1. Consumer Trust in Chatbots
5.1.2. Attributes Favouring Consumer Adoption of Chatbots
| Antecedent | Dependent Variable | Number of Empirical Studies | Number of Significant Effects | Study |
|---|---|---|---|---|
| Perceived ease of use; perceived usefulness | Adoption or acceptance of chatbots | 3 | 3 | [3,76,79] |
| Perceived ease of use; perceived usefulness | Acceptance of chatbot (1) | 1 | 1 (2) | [72] |
| Perceived ease of use; perceived usefulness | Intention of reusing chatbot | 2 | 2 | [65,80] |
| Perceived ease of use; perceived usefulness | Perceived value of chatbots | 1 | 1 | [81] |
| Perceived ease of use | Initial Chatbot Trust | 1 | 1 | [67] |
| Perceived ease of use; perceived usefulness | Satisfaction | 1 | 1 | [82] |
| Perceived usefulness | Attitudes towards chatbots | 1 | 1 | [83] |
| Perceived usefulness | Customer experience | 1 | 1 | [84] |
| Perceived ease of use; perceived usefulness | Intention of using chatbots | 1 | 1 (3) | [85] |
| Performance Expectancy | Continuance intention of using chatbot | 1 | 1 | [78] |
| Performance Expectancy | Trust in chatbot | 1 | 0 (4) | [67] |
| Performance Expectancy | Intention to adopt (use or accept) chatbots | 4 | 4 | [12,77,86,87] |
5.1.3. Anthropomorphic Traits of Chatbots and Chatbots’ Personality
5.1.4. Emotions Regarding Chatbot Adoption
5.2. Barriers to Consumer Adoption of Chatbots
5.3. Why Do Findings Differ? Boundary Conditions in Chatbot Adoption
5.4. Robust and Context-Dependent Drivers of Chatbot Adoption
6. Discussion and Future Research
- Most studies of consumer chatbot adoption are based on theoretical models such as TAM, UTAUT, and, to a lesser extent, U&G. In contrast, we observe a dearth of research into consumer adoption of chatbots based on theoretical models such as task–technology fit (TTF), the theory of reasoned action (TRA), or the consumer acceptance of technology (CAT) model. TAM and UTAUT, which dominate the literature, adopt a predominantly utilitarian and cognitive perspective, with a strong emphasis on efficiency, performance, and instrumental value. By contrast, the CAT model acknowledges that consumers adopt technologies not only for their functional benefits, but also for their ability to generate emotional engagement, enjoyment, and stimulation; it integrates emotional dimensions such as pleasure and arousal, which capture the valence and intensity of emotional responses. Future research that integrates multiple theories and concurrent models is essential to enrich our understanding of chatbot adoption through complementarity. By jointly considering utilitarian, affective, social, and risk-related perspectives, such integrative approaches can provide a more comprehensive and nuanced understanding of the relative influence of adoption drivers and barriers.
- The effects of perceived ease of use, usefulness, and, to a lesser extent, anthropomorphism are generally consistent across the reviewed studies. The few non-significant effects reported in the literature can be explained by contextual, temporal, and design-related contingencies. First, the salience of utilitarian predictors such as perceived ease of use and usefulness appears to depend on the stage of interaction. These constructs are more influential at early adoption stages or prior to actual use, whereas their effects often weaken once users gain experience and shift attention towards trust, emotional responses, and privacy concerns. This pattern is evident in studies showing that when perceived privacy risk or technological anxiety is introduced into adoption models, traditional TAM variables become non-significant or exert weaker effects, especially in high-risk or data-intensive service contexts. This points to the need for a nuanced investigation that considers both the timing of chatbot use and the specific characteristics of the study context. On the one hand, emphasis should be placed on factors such as perceived ease of use, perceived usefulness, and perceived convenience, particularly when customers have yet to experience the chatbot service. On the other hand, perceived privacy risk and technological anxiety may assume greater relevance when the use of chatbots involves the sharing of personal information. For instance, in high-stakes or data-sensitive domains (e.g., banking, insurance, healthcare), privacy risk and creepiness become critical, overshadowing ease of use, usefulness, and social cues (e.g., anthropomorphic cues). Second, future studies should consider the non-linear effect of anthropomorphic cues: while moderate anthropomorphic cues enhance social presence, trust, and emotional engagement, excessive human-likeness can trigger discomfort and uncanny valley responses, resulting in resistance rather than acceptance. Third, user characteristics (e.g., technology anxiety, need for human interaction, personality traits) further explain variability, as these traits amplify or attenuate responses to both functional and social chatbot features. Taken together, these findings suggest that the mixed or sometimes non-significant effects reported in the literature do not indicate theoretical weakness or inconsistency, but rather underscore the highly contingent nature of chatbot adoption, which is jointly shaped by interaction timing, sector-specific risk, design intensity, and user heterogeneity. Future research should therefore move beyond testing main effects in isolation and investigate boundary conditions and interaction effects.
- Despite the recognized importance of emotions in service experiences, relatively few studies have examined consumer emotions in the context of chatbot interactions. Only a limited number of studies investigate how consumers’ emotional experiences with chatbots influence outcomes such as satisfaction and trust [106]. The available evidence suggests that chatbots that successfully convey both human-like traits and emotional cues are more likely to be perceived as engaging and effective. It further suggests that consumers value not only the quality of information provided, but also the emotional aspects of their interactions with chatbots. Future research should therefore adopt a more integrative perspective by jointly examining utilitarian and hedonic dimensions when investigating chatbot adoption, satisfaction, trust, and continued use. Further studies are needed to explore how hedonic versus utilitarian goals shape emotional enjoyment during chatbot interactions across different consumer segments. Key research questions include whether emotionally expressive chatbots are perceived as more human, how the display of human-like emotions affects satisfaction and adoption, and how emotional mechanisms differ across hedonic and utilitarian service contexts (e.g., entertainment vs. banking). Building on recent qualitative evidence highlighting the emergence of emotional bonds and reliance on conversational AI [105], future research could also examine the sustainability of such relationships and their long-term implications for continued use and dependency. Moreover, comparative studies across sectors (e.g., finance, e-government) are needed to assess how contextual risk and user expectations interact with emotional responses such as trust, empathy, and enjoyment to shape chatbot adoption and resistance.
- In the case of service failure, consumers show reluctance to use and trust chatbots. For instance, Ref. [107] report that customers tend to hold persistent beliefs that chatbots lack emotional competence, meaning that apologies and explanations provided after service failures are generally better received when delivered by human employees rather than by chatbots. These findings raise the question of whether strategies such as design cues, training signals, or transparency mechanisms could help change consumers’ beliefs about the emotional capabilities of chatbots. If such beliefs can be reshaped, chatbots may become comparably effective to human employees in managing service recovery situations. For training signals, future research could compare two experimental scenarios in which customers are either informed or not that the chatbot has been trained on customer service situations, emotional expressions, or service recovery protocols. This comparison would help assess whether signaling emotional training influences consumer perceptions and responses. Regarding transparency mechanisms, future studies could examine consumer perceptions and beliefs about how chatbots process emotions, detect sentiment, and decide when to transfer an interaction to a human agent. Such transparency may reduce skepticism and mitigate negative beliefs about chatbots’ emotional incapacity. Similar research could also investigate the effects of moderate anthropomorphic cues. Cues could involve emotion-sensitive wording, empathetic language (e.g., “I understand how frustrating this situation can be”), adaptive tone (e.g., a chatbot may adopt a reassuring tone when users express anxiety and a friendly tone in a hedonic context), or moderated anthropomorphic features (e.g., warm greetings, acknowledgment of frustration).
- In the literature on the role of anthropomorphism in chatbot adoption, increasing attention is directed towards the role of chatbot personality. Anthropomorphic design involves not only visual or linguistic human-likeness, but also the attribution of stable personality traits to chatbots, which influence consumer perceptions, emotional responses, and adoption outcomes. In this regard, personality-based frameworks such as the Big Five personality model could help researchers better comprehend whether chatbots that mirror consumers’ personality traits are more effective in promoting consumer adoption, engagement, and satisfaction. For instance, [92] specifically examine the role of extroversion and demonstrate that an extroverted chatbot is better suited for interaction with extroverted users, while introverted users respond more favorably to less outgoing chatbot personalities. However, existing research remains largely limited, and a more comprehensive investigation of personality dimensions using the Big Five model is warranted. Future research could explore whether chatbot personalities can be strategically tailored through linguistic style, tone, and response patterns to better align with consumer personality profiles. Such personalization could be particularly beneficial in emotionally charged or high-stress contexts (e.g., service disruptions and prolonged travel), where consumers are more vulnerable to negative emotions, emotional strain, and anxiety, increasing the likelihood of service interaction failures. Therefore, future studies can move beyond studying generic human-likeness to provide a more nuanced understanding of how specific human-like traits influence emotional responses, emotional engagement, trust, satisfaction, and continued use of chatbots.
- Regarding methodological issues, the systematic review highlights a predominant reliance on quantitative studies. In Section 4, we reported that practices such as recruiting study participants through online platforms (e.g., MTurk) may pose challenges to external validity, even though such platforms draw from a sizable pool of candidates [108]. Indeed, a significant portion of the selected studies relies on student, convenience, or snowball samples, potentially introducing selection bias because participants recruit individuals they already know. We encourage scholars to undertake more qualitative research, as it holds the potential to offer a deeper understanding of this phenomenon and contribute to the development of novel theories or the expansion of existing ones in the realm of chatbot adoption. Our systematic review also points out a dearth of longitudinal studies on the phenomenon. Longitudinal designs would be particularly valuable for capturing how utilitarian, affective, social, and risk-related drivers evolve over time and across interaction stages. Lastly, most studies are confined to specific countries, introducing potential biases. To mitigate these biases, researchers are encouraged to conduct multi-country and multi-sector studies, thereby enhancing the external validity of their research.
- Another important methodological observation concerns the geographic concentration and sampling approaches used in the empirical literature. As reported earlier in Section 4, a significant proportion of studies rely on convenience samples and are conducted in a relatively small number of countries, particularly China, the United States, India, and the United Kingdom. While these contexts provide valuable insights into chatbot adoption, this concentration may influence the generalizability of certain findings. Constructs such as trust, anthropomorphism, privacy concerns, and technological anxiety are likely to be sensitive to cultural norms, institutional environments, and regulatory frameworks. For instance, perceptions of privacy risk and data protection may differ significantly between regions with strong regulatory regimes and those with more permissive data practices, while anthropomorphic design cues may be interpreted differently depending on cultural attitudes toward human–AI interaction. Therefore, the strength and direction of these relationships may vary across cultural contexts. Future research should thus expand empirical investigations to a broader range of geographic regions and employ cross-cultural comparative designs to better assess the robustness of these mechanisms across institutional and cultural environments.
7. Study Limitations
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| AIDUA | Artificially Intelligent Device Use Acceptance |
| CASA | Computers Are Social Actors |
| DOI | Diffusion of Innovation |
| ELM | Elaboration Likelihood Model |
| HCI | Human–Computer Interaction |
| IDT | Innovation Diffusion Theory |
| NLP | Natural Language Processing |
| SEM | Structural Equation Modeling |
| SLR | Systematic Literature Review |
| SOR | Stimulus–Organism–Response |
| TAM | Technology Acceptance Model |
| TRA | Theory of Reasoned Action |
| TTF | Task–Technology Fit |
| U&G | Uses and Gratifications |
| UTAUT | Unified Theory of Acceptance and Use of Technology |
| UTAUT2 | Unified Theory of Acceptance and Use of Technology 2 |
Appendix A. Review of Prior Reviews of Chatbots and Distinct Contribution of the Present Study
| Study | Scope | Time Window | Method | Unique Contribution of Prior Study | How Our Study Differs in Terms of Research Objectives and Contributions |
|---|---|---|---|---|---|
| [109] | Marketing chatbots | 2000–2019 | SLR (n = 53) | Morphological taxonomy (264 variants) | Our study adds effect-direction coding (±/mixed), integrates drivers and barriers, examines cross-industry moderation (retail/banking/health/public), and evaluates construct redundancy. |
| [35] | Human–chatbot interaction | 2010–2020 | SLR (n = 83) | Experiential synthesis over a decade | Our study unifies cognitive, affective, technological, and contextual drivers and codes statistical consistency. It also compares sectors rather than adopting an HCI-only focus. |
| [110] | Tourism evolution | 2016–2020 | SLR (n = 27) | Tourism-focused implementation taxonomy | Our study moves from technical taxonomy to empirically validated psychological determinants. It also conducts a multi-sector comparison. |
| [36] | Chatbots & loyalty | 2015–2021 | SLR (n = 41) | Loyalty linkage model | Our study builds full antecedent → mediator → outcome chain. It also tests industry/modality boundary effects. |
| [111] | Adoption mapping | 2000–2023 | SLR (n = 219) | Large-scale theoretical mapping | Our study codes statistical direction & contradictions. It assesses measurement heterogeneity and evaluates construct overlap. |
| [112] | AI chatbot adoption | 2017–2024 | SLR (n = 61) | Conceptual AI adoption framework | Our study integrates barriers and outcomes, compares AI vs. rule-based bots and identifies sectoral moderators. |
| [90] | Drivers & barriers | 2010–2024 | SLR (n = 84) | Determinant integration | Our study links determinants to specific behavioral outcomes. It also contrasts regulated industries vs. non-regulated industries. |
| [41] | Higher education chatbots | 2019–2023 | SLR (n = 40) | Education-focused synthesis | Our study extends beyond education and enables cross-sector boundary testing. |
| [42] | AI in higher education | 2022–2024 | SLR (n = 37) | Activity-theory model | Our study integrates TAM/UTAUT/SOR/TPB simultaneously and it tests robustness across industries. |
| [113] | Hospitality | 2019–2023 | SLR (n = 48) | Hospitality implementation roadmap | Our study compares hospitality vs. retail/banking/health; identifies contextual boundary conditions. |
| [114] | Healthcare | 2010–2024 | SLR (n = 84) | Healthcare research agenda | Our study compares healthcare to other industries. Our study identifies cross-sector invariants vs. healthcare-specific moderators (regulatory intensity). |
| [85] | TAM validation | 2008–2022 | Meta-analysis (n = 70) | Quantitative SEM validation of TAM | Our study extends beyond TAM-only, integrates affective, technological, contextual inhibitors and industry moderation. |
| [104] | Adoption intention | 2000–2023 | Meta-analysis (n = 54) | Moderator reconciliation | Our study moves beyond intention. It also integrates loyalty, satisfaction, resistance constructs. |
| [115] | Trust | 2005–2024 | Meta-analysis (n = 54) | Trust-centered synthesis | Our study situates trust within multi-driver ecosystem. It also tests interaction with usefulness/privacy. |
| [40] | Anthropomorphism | 2010–2024 | Meta-analytic SEM (n = 32) | SOR pathway validation | Our study positions anthropomorphism alongside usefulness/risk/privacy, cross-industry strength comparison. |
| [28] | Startup theories | 2020–2024 | Narrative Review | Startup theoretical mapping | Our study provides empirical cross-sector validation beyond theory classification. |
| [116] | Customer experience | 2019–2022 | Narrative Review | Marketing-focused CE synthesis | Our study integrates CE with determinants, inhibitors, and effect-direction coding, and extends beyond marketing-only samples. |
Appendix B. Database-Specific Search Strings
| Database | Fields Searched | Database-Specific Search Syntax |
|---|---|---|
| ABI/Inform Global (ProQuest) | Title (TI) Abstract (AB) Keywords (KW) | TI, AB, KW ((“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “ai assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “ai-powered assistant” OR “automated conversational system” OR “conversational ai” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “ECA” OR “dialogue system”) AND (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) AND (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus-organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”)) |
| Business Source Premier (EBSCOhost) | Title (TI) Abstract (AB) Subject Terms (SU) | (TI OR AB OR SU) ((“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “ai assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “ai-powered assistant” OR “automated conversational system” OR “conversational ai” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “eca” OR “dialogue system”) AND (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) AND (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus-organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”)) |
| Web of Science Core Collection | Topic (TS) | TS = ((“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “ai assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “ai-powered assistant” OR “automated conversational system” OR “conversational ai” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “ECA” OR “dialogue system”) AND (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) AND (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus-organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”)) |
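For replication, the three-block structure of these search strings (technology terms AND user/adoption terms AND theory/affect terms) can be assembled programmatically, which reduces transcription errors such as unbalanced parentheses. The sketch below is not part of the review protocol; the function names are ours and the term lists are abbreviated for illustration.

```python
# Hypothetical helper for assembling a three-block Boolean search string,
# mirroring the structure of the Appendix B queries (term lists abbreviated).

def boolean_block(terms):
    """Join quoted terms with OR inside a single pair of parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(*blocks):
    """AND together one OR-block per concept group."""
    return " AND ".join(boolean_block(b) for b in blocks)

# Abbreviated illustrations of the three concept groups in Appendix B.
technology = ["chatbot", "conversational agent", "virtual assistant"]
user_focus = ["consumer", "adoption", "use intention"]
theory = ["TAM", "UTAUT", "anthropomorphism"]

query = build_query(technology, user_focus, theory)
print(query)
# ("chatbot" OR "conversational agent" OR "virtual assistant") AND
# ("consumer" OR "adoption" OR "use intention") AND
# ("TAM" OR "UTAUT" OR "anthropomorphism")
```

The resulting string can then be pasted into a database's advanced-search field, wrapped in the database-specific field tag (e.g., `TS = (...)` for Web of Science), keeping the Boolean logic identical across databases.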
Appendix C. Data Extraction Template and Coding Definitions
| Field | Definition and Coding Rule |
|---|---|
| Author(s) | Full list of authors as reported in the published article. |
| Publication Year | Year of publication of the article. Articles in press were coded using the year indicated by the journal. |
| Journal | Journal in which the article was published. |
| Study Type | Coded as quantitative, qualitative, mixed methods, systematic review, or meta-analysis, based on the primary methodology used. |
| Theoretical Framework(s) | All explicitly stated theories or models used to explain chatbot adoption. Multiple theories were coded when applicable. |
| Key Variables/Constructs | Independent, mediating, moderating, and dependent variables examined in relation to chatbot adoption or usage. |
| Drivers of Adoption | Factors empirically found to have a positive and statistically significant effect on chatbot adoption, usage intention, continuance intention, satisfaction, or acceptance. |
| Barriers to Adoption | Factors empirically found to have a negative and statistically significant effect on chatbot adoption or continued use. |
| Industry/Sector | Primary industry context studied (e.g., hospitality, banking, e-commerce, healthcare). Studies examining more than one industry were coded as multi-sector. |
| Chatbot Type | Coded as task-oriented, service-oriented, social/relational, or mixed, based on the chatbot’s primary function. |
| Geographic Context | Country or countries in which the empirical data were collected. Multi-country studies were coded separately. |
| Sample Characteristics | Sample size and participant type (e.g., consumers, students, customers, managers). |
| Analytical Method | Main analytical technique used (e.g., SEM, regression, ANOVA, thematic analysis). |
| Key Findings | Main findings related to chatbot adoption drivers and/or barriers. |
Appendix D. Drivers and Barriers to Chatbot Adoption
| Category | Construct | Freq. | Stat. | Main Outcomes | Country Concentration | Industry Context | Chatbot Type |
|---|---|---|---|---|---|---|---|
| Utilitarian | Perceived Usefulness | 39 | A | Adoption, continuance | China, Germany, UK | Hospitality, Retail, Education | Transactional & Informational (Text) |
| Utilitarian | Perceived Ease of Use | 32 | B | Adoption, trust | China, India, Korea, UK | Hospitality, E-commerce | Text-based |
| Utilitarian | Performance Expectancy | 22 | A | Adoption | China, India, Korea, UK | Education, Banking | Transactional |
| Utilitarian | Effort Expectancy | 18 | B | Adoption intention | China, UK | Education, Retail | Transactional |
| Utilitarian | Facilitating Conditions | 9 | B | Continuance | India, China, UK | Education | Transactional |
| Utilitarian | Compatibility | 6 | A | Initial trust | China | Retail | Transactional |
| Utilitarian | Information Quality | 11 | A | Satisfaction | China | Finance, E-commerce | Informational |
| Utilitarian | System Quality | 7 | A | Adoption | Multi-country | Service sectors | Transactional |
| Utilitarian | Service Quality | 8 | A | Experience | China | Hospitality | Service bots |
| Utilitarian | Convenience | 5 | A | Trust | China | Online retail | Informational |
| Utilitarian | Response Accuracy | 6 | A | Trust | China | Banking | Transactional |
| Utilitarian | Perceived Intelligence | 10 | B | Adoption | China, US | Retail | AI-powered |
| Utilitarian | Perceived Competence | 6 | A | Trust | China | Service sectors | Transactional |
| Utilitarian | Task–Technology Fit | 4 | A | Adoption intention | China | Education | Transactional |
| Utilitarian | Perceived Value | 5 | A | Adoption | Multi-country | Mixed sectors | Transactional |
| Trust-Based | Trust (overall) | 13 | A | Adoption, loyalty | China, UK | Banking, Insurance | Transactional |
| Trust-Based | Initial Trust | 6 | A | Adoption | China | Retail | Transactional |
| Trust-Based | Trust in Data Protection | 4 | A | Adoption | EU, China | Banking, Public services | Data-intensive |
| Trust-Based | Trust in Functionality | 3 | A | Usage | China | Finance | Transactional |
| Trust-Based | Credibility | 4 | A | Trust | China | Banking | Transactional |
| Social | Anthropomorphism | 59 | B | Adoption, satisfaction | China, US, UK | Hospitality, Retail | Text & Voice; Social bots |
| Social | Human-likeness | 14 | B | Adoption | China | Retail | Voice-based higher |
| Social | Empathy | 8 | A | Engagement | UK, China | Hospitality | Service & Social bots |
| Social | Social Presence | 11 | A | Trust | China, US | Service sectors | Voice-based |
| Social | Avatar Presence | 6 | B | Adoption | China | Retail | Visual bots |
| Social | Chatbot Personality | 7 | A | Adoption | US, China | Retail | Social bots |
| Social | Personality Congruence | 4 | A | Engagement | China | Retail | Social bots |
| Social | Mind Perception | 5 | A | Closeness | China | Mixed sectors | Anthropomorphic bots |
| Social | Parasocial Interaction | 3 | A | Attachment | UK, Qatar | Social bots | Companion bots |
| Social | Perceived Warmth | 5 | A | Trust | China | Hospitality | Service bots |
| Affective | Enjoyment | 9 | B | Continuance | China, India | Hospitality | Conversational bots |
| Affective | Hedonic Motivation | 8 | A | Adoption | China (Gen Z) | Retail, Education | Conversational |
| Affective | Emotional Attachment | 4 | A | Continuance | UK | Social bots | Companion bots |
| Affective | Intimacy (Emojis) | 3 | A | Satisfaction | China | Retail | Text-based |
| Affective | Engagement | 6 | A | Adoption | China | Retail | Interactive bots |
| Affective | Experiential Value | 4 | A | Experience | China | Hospitality | Service bots |
| Barrier | Perceived Risk | 4 | C | Adoption | India, China | Banking, Insurance | Transactional |
| Barrier | Privacy Concerns | 6 | C | Adoption | EU, US | Banking, Public | Voice/Data bots |
| Barrier | Technological Anxiety | 4 | C | Continuance | India | Mixed sectors | AI-powered |
| Barrier | Creepiness | 3 | C | Trust | China | Retail | Highly anthropomorphic |
| Barrier | Uncanny Valley | 2 | C | Adoption | China | Retail | Voice-based |
| Barrier | Perceived Time Risk | 2 | C | Continuance | India | Service | Transactional |
| Barrier | Perceived Sacrifice | 2 | B | Experience | China | E-commerce | Transactional |
| Barrier | AI Skepticism | 2 | C | Adoption | US | Public services | AI bots |
| Barrier | Low Technology | 2 | C | Adoption | India | Mixed | AI-powered |
References
- Luo, X.; Tong, S.; Fang, Z.; Qu, Z. Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Mark. Sci. 2019, 38, 937–947. [Google Scholar] [CrossRef]
- Priya, B.; Sharma, V. Exploring users’ adoption intentions of intelligent virtual assistants in financial services: An anthropomorphic perspectives and socio-psychological perspectives. Comput. Hum. Behav. 2023, 148, 107912. [Google Scholar] [CrossRef]
- Rese, A.; Ganster, L.; Baier, D. Chatbots in retailers’ customer communication: How to measure their acceptance? J. Retail. Consum. Serv. 2020, 56, 102176. [Google Scholar] [CrossRef]
- Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Gener. Comput. Syst. 2019, 92, 539–548. [Google Scholar] [CrossRef]
- Eren, B.A. Determinants of customer satisfaction in chatbot use: Evidence from a banking application in Turkey. Int. J. Bank. Mark. 2021, 39, 294–311. [Google Scholar] [CrossRef]
- Sheehan, B.; Jin, H.S.; Gottlieb, U. Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res. 2020, 115, 14–24. [Google Scholar] [CrossRef]
- Nadarzynski, T.; Miles, O.; Cowie, A.; Ridge, D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit. Health 2019, 5, 205520761987180. [Google Scholar] [CrossRef]
- Pérez, J.Q.; Daradoumis, T.; Puig, J.M.M. Rediscovering the use of chatbots in education: A systematic literature review. Comput. Appl. Eng. Educ. 2020, 28, 1549–1565. [Google Scholar] [CrossRef]
- Cai, D.; Li, H.; Law, R. Anthropomorphism and OTA chatbot adoption: A mixed methods study. J. Travel Tour. Mark. 2022, 39, 228–255. [Google Scholar] [CrossRef]
- Chi, O.H.; Denton, G.; Gursoy, D. Artificially intelligent device use in service delivery: A systematic review, synthesis, and research agenda. J. Hosp. Mark. Manag. 2020, 29, 757–786. [Google Scholar] [CrossRef]
- Corti, K.; Gillespie, A. Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human. Comput. Hum. Behav. 2016, 58, 431–442. [Google Scholar] [CrossRef]
- Terblanche, N.; Kidd, M. Adoption Factors and Moderating Effects of Age and Gender That Influence the Intention to Use a Non-Directive Reflective Coaching Chatbot. SAGE Open 2022, 12, 215824402210961. [Google Scholar] [CrossRef]
- Gkinko, L.; Elbanna, A. The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users. Int. J. Inf. Manag. 2023, 69, 102568. [Google Scholar] [CrossRef]
- Jang, M.; Jung, Y.; Kim, S. Investigating managers’ understanding of chatbots in the Korean financial industry. Comput. Hum. Behav. 2021, 120, 106747. [Google Scholar] [CrossRef]
- Lei, S.I.; Shen, H.; Ye, S. A comparison between chatbot and human service: Customer perception and reuse intention. Int. J. Contemp. Hosp. Manag. 2021, 33, 3977–3995. [Google Scholar] [CrossRef]
- Liu, M.; Yang, Y.; Ren, Y.; Jia, Y.; Ma, H.; Luo, J.; Fang, S.; Qi, M.; Zhang, L. What influences consumer AI chatbot use intention? An application of the extended technology acceptance model. J. Hosp. Tour. Technol. 2024, 15, 667–689. [Google Scholar] [CrossRef]
- Liu, G.L.; Darvin, R.; Ma, C. Exploring AI-mediated informal digital learning of English (AI-IDLE): A mixed-method investigation of Chinese EFL learners’ AI adoption and experiences. Comput. Assist. Lang. Learn. 2024, 38, 1632–1660. [Google Scholar] [CrossRef]
- Leung, C.H.; Chan, W.T.Y. Retail chatbots: The challenges and opportunities of conversational commerce. Soc. Media Mark. 2020, 8, 68. [Google Scholar] [CrossRef]
- Ayanwale, M.A.; Molefi, R.R. Exploring intention of undergraduate students to embrace chatbots: From the vantage point of Lesotho. Int. J. Educ. Technol. High. Educ. 2024, 21, 20. [Google Scholar] [CrossRef]
- Tian, W.; Ge, J.; Zhao, Y.; Zheng, X. AI Chatbots in Chinese higher education: Adoption, perception, and influence among graduate students—An integrated analysis utilizing UTAUT and ECM models. Front. Psychol. 2024, 15, 1268549. [Google Scholar] [CrossRef]
- Wang, C.; Li, S.; Lin, N.; Zhang, X.; Han, Y.; Wang, X.; Liu, D.; Tan, X.; Pu, D.; Li, K.; et al. Application of Large Language Models in Medical Training Evaluation—Using ChatGPT as a Standardized Patient: Multimetric Assessment. J. Med. Internet Res. 2025, 27, e59435. [Google Scholar] [CrossRef]
- Fatima, J.K.; Khan, M.I.; Bahmannia, S.; Chatrath, S.K.; Dale, N.F.; Johns, R. Rapport with a chatbot? The underlying role of anthropomorphism in socio-cognitive perceptions of rapport and e-word of mouth. J. Retail. Consum. Serv. 2024, 77, 103666. [Google Scholar] [CrossRef]
- Esiyok, E.; Gokcearslan, S.; Kucukergin, K.G. Acceptance of Educational Use of AI Chatbots in the Context of Self-Directed Learning with Technology and ICT Self-Efficacy of Undergraduate Students. Int. J. Hum.–Comput. Interact. 2025, 41, 641–650. [Google Scholar] [CrossRef]
- Agnihotri, A.; Bhattacharya, S. Chatbots’ effectiveness in service recovery. Int. J. Inf. Manag. 2024, 76, 102679. [Google Scholar] [CrossRef]
- Zhang, H.; Qiu, S.; Wang, X.; Yuan, X. Robots or humans: Who is more effective in promoting hospitality services? Int. J. Hosp. Manag. 2024, 119, 103728. [Google Scholar] [CrossRef]
- Chin, H.; Yi, M.Y. Exploring the influence of user characteristics on verbal aggression towards social chat-bots. Behav. Inf. Technol. 2025, 44, 1576–1594. [Google Scholar] [CrossRef]
- Dastane, O.; Ooi, M.Y.; Aw, E.C.X.; Shyu, W.H.; Tan, G.W.H. Skip the AI-BOTs: Let’s have real conversations in human-centric services. J. Consum. Mark. 2025, 42, 484–497. [Google Scholar] [CrossRef]
- Najarian, A.; Hejazinia, R. An Examination of Technology Acceptance Models for Chatbot Adoption in Startup E-businesses: A Narrative Review. Int. J. Manag. Account. Econ. 2025, 12, 471. [Google Scholar] [CrossRef]
- Singh, D.; Kunja, S.R. Engaging guests for a greener tomorrow: Examining the role of hotel chatbots in encouraging pro-environmental behavior. Tour. Hosp. Res. 2025. [Google Scholar] [CrossRef]
- Song, M.; Zhang, H.; Xing, X.; Duan, Y. Appreciation vs. apology: Research on the influence mechanism of chatbot service recovery based on politeness theory. J. Retail. Consum. Serv. 2023, 73, 103323. [Google Scholar] [CrossRef]
- Xiao, R.; Yazan, M.; Situmeang, F.B.I. Rethinking Conversation Styles of Chatbots from the Customer Perspective: Relationships between Conversation Styles of Chatbots, Chatbot Acceptance, and Perceived Tie Strength and Perceived Risk. Int. J. Hum.–Comput. Interact. 2025, 41, 1343–1363. [Google Scholar] [CrossRef]
- Ischen, C.; Araujo, T.; van Noort, G.; Voorveld, H.; Smit, E. “I Am Here to Assist You Today”: The Role of Entity, Interactivity and Experiential Perceptions in Chatbot Persuasion. J. Broadcast. Electron. Media 2020, 64, 615–639. [Google Scholar] [CrossRef]
- Tranfield, D.; Denyer, D.; Smart, P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
- Chen, H.L.; Vicki Widarso, G.; Sutrisno, H.A. ChatBot for Learning Chinese: Learning Achievement and Technology Acceptance. J. Educ. Comput. Res. 2020, 58, 1161–1189. [Google Scholar] [CrossRef]
- Rapp, A.; Curti, L.; Boldi, A. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. Int. J. Hum.–Comput. Stud. 2021, 151, 102630. [Google Scholar] [CrossRef]
- Jenneboer, L.; Herrando, C.; Constantinides, E. The Impact of Chatbots on Customer Loyalty: A Systematic Literature Review. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 212–229. [Google Scholar] [CrossRef]
- Żyminkowska, K.; Zachurzok-Srebrny, E. The Role of Artificial Intelligence in Customer Engagement and Social Media Marketing—Implications from a Systematic Review for the Tourism and Hospitality Sectors. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 184. [Google Scholar] [CrossRef]
- Ling, E.C.; Tussyadiah, I.; Tuomi, A.; Stienmetz, J.; Ioannou, A. Factors influencing users’ adoption and use of conversational agents: A systematic review. Psychol. Mark. 2021, 38, 1031–1051. [Google Scholar] [CrossRef]
- Greilich, A.; Bremser, K.; Wüst, K. Consumer Response to Anthropomorphism of Text-Based AI Chatbots: A Systematic Literature Review and Future Research Directions. Int. J. Consum. Stud. 2025, 49, e70108. [Google Scholar] [CrossRef]
- Zhang, F.; Sheng, D. Anthropomorphism’s impact on chatbot adoption: A meta-analytic structural equation modeling approach. Technol. Soc. 2026, 84, 103099. [Google Scholar] [CrossRef]
- Anjulo Lambebo, E.; Chen, H.L. Chatbots in higher education: A systematic review. Interact. Learn. Environ. 2025, 33, 2781–2807. [Google Scholar] [CrossRef]
- Ma, W.; Ma, W.; Hu, Y.; Bi, X. The who, why, and how of ai-based chatbots for learning and teaching in higher education: A systematic review. Educ. Inf. Technol. 2025, 30, 7781–7805. [Google Scholar] [CrossRef]
- Adamopoulou, E.; Moussiades, L. Chatbots: History, technology, and applications. Mach. Learn. Appl. 2020, 2, 100006. [Google Scholar] [CrossRef]
- Huang, A.; Chao, Y.; De La Mora Velasco, E.; Bilgihan, A.; Wei, W. When artificial intelligence meets the hospitality and tourism industry: An assessment framework to inform theory and management. J. Hosp. Tour. Insights 2022, 5, 1080–1100. [Google Scholar] [CrossRef]
- Bouhia, M.; Rajaobelina, L.; PromTep, S.; Arcand, M.; Ricard, L. Drivers of privacy concerns when interacting with a chatbot in a customer service encounter. Int. J. Bank. Mark. 2022, 40, 1159–1181. [Google Scholar] [CrossRef]
- Beattie, A.; Edwards, A.P.; Edwards, C. A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication. Commun. Stud. 2020, 71, 409–427. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. J. Clin. Epidemiol. 2021, 134, 178–189. [Google Scholar] [CrossRef] [PubMed]
- Paul, J.; Lim, W.M.; O’Cass, A.; Hao, A.W.; Bresciani, S. Scientific Procedures and Rationales for Systematic Literature Reviews (SPAR-4-SLR). Int. J. Consum. Stud. 2021, 45, O1–O16. [Google Scholar] [CrossRef]
- Gohoungodji, P.; N’Dri, A.B.; Latulippe, J.M.; Matos, A.L.B. What is stopping the automotive industry from going green? A systematic review of barriers to green innovation in the automotive industry. J. Clean. Prod. 2020, 277, 123524. [Google Scholar] [CrossRef]
- Nordheim, C.B.; Følstad, A.; Bjørkli, C.A. An Initial Model of Trust in Chatbots for Customer Service—Findings from a Questionnaire Study. Interact. Comput. 2019, 31, 317–335. [Google Scholar] [CrossRef]
- Lappeman, J.; Marlie, S.; Johnson, T.; Poggenpoel, S. Trust and digital privacy: Willingness to disclose personal information to banking chatbot services. J. Financ. Serv. Mark. 2023, 28, 337–357. [Google Scholar] [CrossRef]
- Yuriev, A.; Boiral, O.; Francoeur, V.; Paillé, P. Overcoming the barriers to pro-environmental behaviors in the workplace: A systematic review. J. Clean. Prod. 2018, 182, 379–394. [Google Scholar] [CrossRef]
- Gagnon, J.; Halilem, N.; Bouchard, J. A relay race or an ironman? A systematic review of the literature on inno-vation in the mining sector. Resour. Policy 2024, 98, 105363. [Google Scholar] [CrossRef]
- Adam, M.; Wessel, M.; Benlian, A. AI-based chatbots in customer service and their effects on user compliance. Electron. Mark. 2021, 31, 427–445. [Google Scholar] [CrossRef]
- Araujo, T. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput. Hum. Behav. 2018, 85, 183–189. [Google Scholar] [CrossRef]
- Chung, M.; Ko, E.; Joung, H.; Kim, S.J. Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 2020, 117, 587–595. [Google Scholar] [CrossRef]
- Hill, J.; Randolph Ford, W.; Farreras, I.G. Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Comput. Hum. Behav. 2015, 49, 245–250. [Google Scholar] [CrossRef]
- Go, E.; Sundar, S.S. Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Comput. Hum. Behav. 2019, 97, 304–316. [Google Scholar] [CrossRef]
- Fryer, L.K.; Ainley, M.; Thompson, A.; Gibson, A.; Sherlock, Z. Stimulating and sustaining interest in a language course: An experimental comparison of Chatbot and Human task partners. Comput. Hum. Behav. 2017, 75, 461–468. [Google Scholar] [CrossRef]
- Mou, Y.; Xu, K. The media inequality: Comparing the initial human-human and human-AI social interactions. Comput. Hum. Behav. 2017, 72, 432–440. [Google Scholar] [CrossRef]
- Zaki, H.S.; Al-Romeedy, B.S. Chatbot symbolic recovery and customer forgiveness: A moderated mediation model. J. Hosp. Tour. Technol. 2024, 15, 610–628. [Google Scholar] [CrossRef]
- Hasan, R.; Shams, R.; Rahman, M. Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. J. Bus. Res. 2021, 131, 591–597. [Google Scholar] [CrossRef]
- Pillai, R.; Ghanghorkar, Y.; Sivathanu, B.; Algharabat, R.; Rana, N.P. Adoption of artificial intelligence (AI) based employee experience (EEX) chatbots. Inf. Technol. People 2024, 37, 449–478. [Google Scholar] [CrossRef]
- Skjuve, M.; Følstad, A.; Fostervold, K.I.; Brandtzaeg, P.B. My Chatbot Companion—A Study of Human-Chatbot Relationships. Int. J. Hum.–Comput. Stud. 2021, 149, 102601. [Google Scholar] [CrossRef]
- Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
- Meyer-Waarden, L.; Pavone, G.; Poocharoentou, T.; Prayatsup, P.; Ratinaud, M.; Tison, A.; Torné, S. How Service Quality Influences Customer Acceptance and Usage of Chatbots? J. Serv. Manag. Res. 2020, 4, 35–51. [Google Scholar] [CrossRef]
- Mostafa, R.B.; Kasamani, T. Antecedents and consequences of chatbot initial trust. Eur. J. Mark. 2022, 56, 1748–1771. [Google Scholar] [CrossRef]
- Ameen, N.; Tarhini, A.; Reppel, A.; Anand, A. Customer experiences in the age of artificial intelligence. Comput. Hum. Behav. 2021, 114, 106548. [Google Scholar] [CrossRef]
- De Cicco, R.; Silva, S.C.; Alparone, F.R. Millennials’ attitude toward chatbots: An experimental study in a social relationship perspective. Int. J. Retail. Distrib. Manag. 2020, 48, 1213–1233. [Google Scholar] [CrossRef]
- Pizzi, G.; Scarpi, D.; Pantano, E. Artificial intelligence and the new forms of interaction: Who has the control when interacting with a chatbot? J. Bus. Res. 2021, 129, 878–890. [Google Scholar] [CrossRef]
- Yen, C.; Chiang, M.C. Trust me, if you can: A study on the factors that influence consumers’ purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behav. Inf. Technol. 2021, 40, 1177–1194. [Google Scholar] [CrossRef]
- Cheng, X.; Bao, Y.; Zarifis, A.; Gong, W.; Mou, J. Exploring consumers’ response to text-based chatbots in e-commerce: The moderating role of task complexity and chatbot disclosure. Internet Res. 2022, 32, 496–517. [Google Scholar] [CrossRef]
- Zungu, N.P.; Amegbe, H.; Hanu, C.; Asamoah, E.S. AI-driven self-service for enhanced customer experience outcomes in the banking sector. Cogent Bus. Manag. 2025, 12, 2450295. [Google Scholar] [CrossRef]
- Zhang, K.; Luo, J.; Huang, Q.; Zhang, K.; Du, J. The Effect of Perceived Interactivity on Continuance Intention to Use AI Conversational Agents: A Two-Stage Hybrid PLS-ANN Approach. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 255. [Google Scholar] [CrossRef]
- Wang, X.; Lin, X.; Shao, B. Artificial intelligence changes the way we work: A close look at innovating with chatbots. J. Assoc. Inf. Sci. Technol. 2023, 74, 339–353. [Google Scholar] [CrossRef]
- Pillai, R.; Sivathanu, B. Adoption of AI-based chatbots for hospitality and tourism. Int. J. Contemp. Hosp. Manag. 2020, 32, 3199–3226. [Google Scholar] [CrossRef]
- Melián-González, S.; Gutiérrez-Taño, D.; Bulchand-Gidumal, J. Predicting the intentions to use chatbots for travel and tourism. Curr. Issues Tour. 2021, 24, 192–210. [Google Scholar] [CrossRef]
- Balakrishnan, J.; Abed, S.S.; Jones, P. The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technol. Forecast. Soc. Change 2022, 180, 121692. [Google Scholar] [CrossRef]
- Aslam, W.; Ahmed Siddiqui, D.; Arif, I.; Farhat, K. Chatbots in the frontline: Drivers of acceptance. Kybernetes 2022, 52, 3781–3810. [Google Scholar] [CrossRef]
- Silva, S.C.; De Cicco, R.; Vlačić, B.; Elmashhara, M.G. Using chatbots in e-retailing—How to mitigate perceived risk and enhance the flow experience. Int. J. Retail. Distrib. Manag. 2023, 51, 285–305. [Google Scholar] [CrossRef]
- Chhikara, D.; Sharma, R.; Kaushik, K. Indian E-commerce consumer and their acceptance towards chatbots. Acad. Mark. Stud. J. 2022, 26, 1–10. [Google Scholar]
- Pereira, T.; Limberger, P.F.; Minasi, S.M.; Buhalis, D. New Insights into Consumers’ Intention to Continue Using Chatbots in the Tourism Context. J. Qual. Assur. Hosp. Tour. 2022, 25, 754–780. [Google Scholar] [CrossRef]
- Van den Broeck, E.; Zarouali, B.; Poels, K. Chatbot advertising effectiveness: When does the message get through? Comput. Hum. Behav. 2019, 98, 150–157. [Google Scholar] [CrossRef]
- Rizomyliotis, I.; Kastanakis, M.N.; Giovanis, A.; Konstantoulaki, K.; Kostopoulos, I. “How mAy I help you today?” The use of AI chatbots in small family businesses and the moderating role of customer affective commitment. J. Bus. Res. 2022, 153, 329–340. [Google Scholar] [CrossRef]
- Gopinath, K.; Kasilingam, D. Antecedents of intention to use chatbots in service encounters: A meta-analytic review. Int. J. Consum. Stud. 2023, 47, 2367–2395. [Google Scholar] [CrossRef]
- Ragheb, M.A.; Tantawi, P.; Farouk, N.; Hatata, A. Investigating the acceptance of applying chatbot (Artificial intelligence) technology among higher education students in Egypt. Int. J. High. Educ. Manag. 2022, 8, 1–13. [Google Scholar] [CrossRef]
- Paraskevi, G.; Saprikis, V.; Avlogiaris, G. Modeling Nonusers’ Behavioral Intention towards Mobile Chatbot Adoption: An Extension of the UTAUT2 Model with Mobile Service Quality Determinants. Hum. Behav. Emerg. Technol. 2023, 2023, 8859989. [Google Scholar] [CrossRef]
- Pelau, C.; Dabija, D.C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav. 2021, 122, 106855. [Google Scholar] [CrossRef]
- Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The impact of chatbot conversational skill on engagement and perceived humanness. J. Manag. Inf. Syst. 2020, 37, 875–900. [Google Scholar] [CrossRef]
- Alharbi, N.; Ud Din, F.; Paul, D.; Sadgrove, E. Driving AI chatbot adoption: A systematic review of factors, barriers, and future research directions. J. Open Innov. Technol. Mark. Complex. 2025, 11, 100590. [Google Scholar] [CrossRef]
- Lee, S.; Lee, N.; Sah, Y.J. Perceiving a Mind in a Chatbot: Effect of Mind Perception and Social Cues on Co-presence, Closeness, and Intention to Use. Int. J. Hum.–Comput. Interact. 2020, 36, 930–940. [Google Scholar] [CrossRef]
- Shumanov, M.; Johnson, L. Making conversations with chatbots more personalized. Comput Hum Behav 2021, 117, 106627. [Google Scholar] [CrossRef]
- Rajaobelina, L.; Prom Tep, S.; Arcand, M.; Ricard, L. Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychol. Mark. 2021, 38, 2339–2356. [Google Scholar] [CrossRef]
- Mehra, B. Chatbot personality preferences in Global South urban English speakers. Soc. Sci. Humanit. Open 2021, 3, 100131. [Google Scholar] [CrossRef]
- Drouin, M.; Sprecher, S.; Nicola, R.; Perkins, T. Is chatting with a sophisticated chatbot as good as chatting online or FTF with a stranger? Comput. Hum. Behav. 2022, 128, 107100. [Google Scholar] [CrossRef]
- Zarouali, B.; Makhortykh, M.; Bastian, M.; Araujo, T. Overcoming polarization with chatbot news? Investigating the impact of news content containing opposing views on agreement and credibility. Eur. J. Commun. 2021, 36, 53–68. [Google Scholar] [CrossRef]
- Klein, K.; Martinez, L.F. The impact of anthropomorphism on customer satisfaction in chatbot commerce: An experimental study in the food sector. Electron. Commer. Res. 2023, 23, 2789–2825. [Google Scholar] [CrossRef]
- Shen, W.; Li, S. Influence of the Use of Emojis by Chatbots on Interaction Satisfaction. J. Mark. Dev. Compet. 2025, 19, 17. [Google Scholar] [CrossRef]
- Biloš, A.; Budimir, B. Understanding the Adoption Dynamics of ChatGPT among Generation Z: Insights from a Modified UTAUT2 Model. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 863–879. [Google Scholar] [CrossRef]
- Al-Shafei, M. Navigating Human-Chatbot Interactions: An Investigation into Factors Influencing User Satisfaction and Engagement. Int. J. Hum.–Comput. Interact. 2025, 41, 411–428. [Google Scholar] [CrossRef]
- Mou, Y.; Meng, X. Alexa, it is creeping over me—Exploring the impact of privacy concerns on consumer resistance to intelligent voice assistants. Asia Pac. J. Mark. Logist. 2024, 36, 261–292. [Google Scholar] [CrossRef]
- Alabed, A.; Javornik, A.; Gregory-Smith, D.; Casey, R. More than just a chat: A taxonomy of consumers’ relationships with conversational AI agents and their well-being implications. Eur. J. Mark. 2024, 58, 373–409. [Google Scholar] [CrossRef]
- Kwangsawad, A.; Jattamart, A. Overcoming customer innovation resistance to the sustainable adoption of chatbot services: A community-enterprise perspective in Thailand. J. Innov. Knowl. 2022, 7, 100211. [Google Scholar] [CrossRef]
- Cheng, Y.; Jiang, H. How Do AI-driven Chatbots Impact User Experience? Examining Gratifications, Perceived Privacy Risk, Satisfaction, Loyalty, and Continued Use. J. Broadcast. Electron. Media 2020, 64, 592–614. [Google Scholar] [CrossRef]
- Li, B.; Chen, Y.; Liu, L.; Zheng, B. Users’ intention to adopt artificial intelligence-based chatbot: A meta-analysis. Serv. Ind. J. 2023, 43, 1117–1139. [Google Scholar] [CrossRef]
- Magno, F.; Dossena, G. The effects of chatbots’ attributes on customer relationships with brands: PLS-SEM and importance–performance map analysis. TQM J. 2022, 35, 1156–1169. [Google Scholar] [CrossRef]
- Zhang, B.; Zhu, Y.; Deng, J.; Zheng, W.; Liu, Y.; Wang, C.; Zeng, R. “I Am Here to Assist Your Tourism”: Predicting Continuance Intention to Use AI-based Chatbots for Tourism. Does Gender Really Matter? Int. J. Hum.–Comput. Interact. 2023, 39, 1887–1903. [Google Scholar] [CrossRef]
- Stritch, J.M.; Pedersen, M.J.; Taggart, G. The Opportunities and Limitations of Using Mechanical Turk (MTURK) in Public Administration and Management Scholarship. Int. Public Manag. J. 2017, 20, 489–511. [Google Scholar] [CrossRef]
- Ramesh, A.; Chawla, V. Chatbots in Marketing: A Literature Review Using Morphological and Co-Occurrence Analyses. J. Interact. Mark. 2022, 57, 472–496. [Google Scholar] [CrossRef]
- Calvaresi, D.; Ibrahim, A.; Calbimonte, J.P.; Schegg, R.; Fragniere, E.; Schumacher, M. The Evolution of Chatbots in Tourism: A Systematic Literature Review. In Information and Communication Technologies in Tourism; Wörndl, W., Koo, C., Stienmetz, J.L., Eds.; Springer International Publishing: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
- Alsharhan, A.; Al-Emran, M.; Shaalan, K. Chatbot Adoption: A Multiperspective Systematic Review and Future Research Agenda. IEEE Trans. Eng. Manag. 2023, 71, 10232–10244. [Google Scholar] [CrossRef]
- Yatawara, K.; Sampath, T.; Kalupahana, P.L.; Rathnayake, S.; Jayasuriya, N.; Rathnayake, N. A Systematic Review on Consumer Adoption of AI-driven Chatbots. Vis. J. Bus. Perspect. 2025, 2, 1–14. [Google Scholar] [CrossRef]
- Sam, S.J.I.; Jasim, K.M. Diving into the technology: A systematic literature review on strategic use of chatbots in hospitality service encounters. Manag. Rev. Q. 2025, 75, 527–555. [Google Scholar] [CrossRef]
- Jasim, K.M.; Malathi, A.; Bhardwaj, S.; Aw, E.C.X. A systematic review of AI-based chatbot usages in healthcare services. J. Health Organ. Manag. 2025, 39, 877–899. [Google Scholar] [CrossRef] [PubMed]
- Zhang, F.; Li, Y.; Sheng, D. From bias to belief: A meta-analysis of user trust in chatbot adoption and its antecedents. J. Electron. Commer. Res. 2025, 25, 266–305. [Google Scholar]
- Bakkouri, B.E.; Raki, S.; Belgnaoui, T. The Role of Chatbots in Enhancing Customer Experience: Literature Review. Procedia Comput. Sci. 2022, 203, 432–437. [Google Scholar] [CrossRef]

| Categories | Keywords |
|---|---|
| Category 1: Chatbot | (“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “AI assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “AI-powered assistant” OR “automated conversational system” OR “conversational AI” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “ECA” OR “dialogue system”) |
| Category 2: Consumer and psychology | (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) |
| Category 3: Consumer behavior and technology adoption theories | (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”) |
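The three keyword categories above are combined conjunctively at search time, i.e., a record must match at least one term from each category. As a minimal illustrative sketch (not taken from the review itself, and using truncated subsets of the actual term lists), the Boolean query string can be assembled programmatically:

```python
# Hypothetical example: build a database search string of the form
# (Category 1) AND (Category 2) AND (Category 3), where each category
# is an OR-group of quoted keywords, as is typical in systematic reviews.
# The term lists below are abbreviated subsets for illustration only.

CHATBOT_TERMS = ["chatbot", "chat bot", "virtual assistant", "conversational agent"]
CONSUMER_TERMS = ["consumer", "user", "adoption", "use intention"]
THEORY_TERMS = ["technology acceptance model", "UTAUT", "anthropomorphism"]

def or_group(terms):
    """Join quoted terms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(*categories):
    """AND-combine the OR-group for each keyword category."""
    return " AND ".join(or_group(c) for c in categories)

query = build_query(CHATBOT_TERMS, CONSUMER_TERMS, THEORY_TERMS)
print(query)
```

Running this prints a single query string that can be pasted into a database's advanced-search field; actual field tags (e.g., title/abstract restrictions) vary by database and are omitted here.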
| Journals | Frequency | Impact Factor (2025) | Subject Area |
|---|---|---|---|
| International Journal of Human–Computer Interaction | 21 | 4.7 | Management Information System |
| Computers in Human Behavior | 13 | 9.9 | Management Information System |
| Journal of Theoretical and Applied Electronic Commerce Research | 11 | 4.6 | Management Information System |
| Journal of Business Research | 6 | 11.3 | Marketing |
| Journal of Retailing and Consumer Services | 5 | 10.4 | Marketing |
| Education and Information Technologies | 4 | 5.4 | Information Technology |
| European Journal of Marketing | 3 | 5.2 | Marketing |
| Journal of Service Management | 3 | 11.8 | Management |
| Electronic Markets | 2 | 8.5 | Management Information System |
| International Journal of Bank Marketing | 2 | 6.7 | Marketing |
| Psychology and Marketing | 2 | 4.9 | Marketing |
| Other journals | 54 | N/A | Multiple subject areas |
| Rank | Study | Journal (Impact Factor) | ABI/Inform Global, Business Source Premier, and Web of Science (Citations) | Type of Study | Keywords |
|---|---|---|---|---|---|
| 1. | [54] | Electronic Markets (8.5) | 1723 | Quantitative | Artificial intelligence, chatbot, anthropomorphism, social presence, compliance, customer service |
| 2. | [1] | Marketing Science (5.4) | 1637 | Quantitative | Artificial intelligence, chatbot, conversational commerce, new technology |
| 3. | [55] | Computers in Human Behavior (9.9) | 1571 | Quantitative | Disembodied conversational agents, chatbots, service encounters, social presence, anthropomorphism |
| 4. | [56] | Journal of Business Research (11.3) | 1382 | Quantitative | Chatbot, communication, digital marketing, luxury brand, service agents |
| 5. | [57] | Computers in Human Behavior (9.9) | 1371 | Quantitative | CMC, instant messaging, IM chatbot, cleverbot |
| 6. | [58] | Computers in Human Behavior (9.9) | 1259 | Quantitative | Online chat agents, message interactivity, identity cue, anthropomorphic visual cue, compensation effect |
| 7. | [7] | Digital Health (4.7) | 941 | Qualitative | Acceptability, AI, artificial intelligence, bot, chatbot |
| 8. | [4] | Future Generation Computer Systems (7.3) | 935 | Quantitative | Human–computer interactions, chatbots, affective computing, psychophysiology |
| 9. | [59] | Computers in Human Behavior (9.9) | 623 | Quantitative | CMC, interest, novelty effect, education, technology |
| 10. | [60] | Computers in Human Behavior (9.9) | 491 | Quantitative | Human–machine communication, computers are social actors’ paradigm, cognitive-affective processing system, artificial intelligence, chatbot |
| Sample Sizes | Number |
|---|---|
| [0 to 150] | 32 |
| [151 to 300] | 57 |
| [301 to 500] | 49 |
| [501 to 800] | 35 |
| [801 to 1000] | 4 |
| Sample Sizes | Number |
|---|---|
| [0 to 20] | 10 |
| [21 to 40] | 11 |
| [41 to 60] | 4 |
| [61 to 100] | 2 |
| [101 to 200] | 1 |
| Country | Frequency | Country | Frequency | Country | Frequency |
|---|---|---|---|---|---|
| China | 35 | Italy | 5 | Norway | 2 |
| Multi-Countries | 25 | Saudi Arabia | 4 | Lebanon | 2 |
| USA | 20 | Australia | 4 | Germany | 2 |
| India | 18 | Portugal | 4 | Thailand | 2 |
| UK | 13 | Spain | 3 | Vietnam | 2 |
| South Korea | 10 | Croatia | 3 | Turkey | 2 |
| Netherlands | 9 | Sri Lanka | 3 | Pakistan | 2 |
| Malaysia | 6 | Canada | 3 | Singapore | 2 |
| Theories | Frequencies | Theories | Frequencies |
|---|---|---|---|
| Anthropomorphism | 59 | Social Exchange Theory | 1 |
| Technology-acceptance Model (TAM) | 49 | sRAM | 1 |
| Unified theory of acceptance and use of technology (including META-UTAUT and UTAUT2) | 26 | Social Representation Theory | 1 |
| Social-presence theory | 11 | Social Penetration Theory | 1 |
| Expectation-confirmation Model | 7 | Social Impact Theory | 1 |
| SOR | 7 | Similarity Attraction Theory | 1 |
| CASA | 6 | Human–Machine Communication Theory | 1 |
| U&G Model | 4 | UX Honeycomb Model | 1 |
| Information Systems Success Model (ISS) | 3 | Theory of Mind | 1 |
| AIDUA | 3 | HVL Model | 1 |
| Language-based models | 3 | Expectancy Violation Theory | 1 |
| Trust-commitment Theory | 3 | Theory of Perceived Value | 1 |
| SERVQUAL | 3 | Elaboration-Likelihood Model | 1 |
| Consumer-acceptance Model | 2 | SEEK Model | 1 |
| Diffusion of Innovation Theory (DOI) | 2 | Task-Technology Fit | 1 |
| Human–Computer Interaction | 2 | Attachment Theory | 1 |
| Affordance Theory | 2 | Behavioral Reasoning Theory | 1 |
| Trust Theory | 2 | Technology Affordance Constraints Theory | 1 |
| Heuristic Systematic Model | 2 | Trust-Risk Dual Pathway | 1 |
| ICT Adoption Framework | 1 | Self-determination Theory | 1 |
| Cognitive Load Theory | 1 | Social Cognition Theory | 1 |
| Social Identity Theory | 1 | Decomposed Theory of Planned Behavior | 1 |
| Dual Factor Theory | 1 | Stress Coping Theory | 1 |
| Customer Experience Theory | 1 | Parasocial Relationship Theory | 1 |
| Stimulus-Organism-Response (SOR) | 1 | | |
| Theory of Planned Behavior | 1 | | |
| Antecedent | Dependent Variable | Number of Empirical Studies | Effects Significance | Study |
|---|---|---|---|---|
| Perceived anthropomorphic cues and perceived empathy of chatbot | Acceptance towards chatbot | 1 | 1 | [88] |
| Anthropomorphic cues | N/A (2) | Review paper | — | [90] |
| Anthropomorphic cues (mind perception) | Intention of using chatbot (via closeness) | 1 | 1 | [91] |
| Anthropomorphic cues | Satisfaction with chatbot | 1 | 0 (NS) (1) | [70] |
| Anthropomorphic cues | Intention of using chatbot | 1 | 0 (NS) (1) | [9] |
| Anthropomorphic cues | Compliance with chatbot request | 1 | 1 | [54] |
| Social Presence | Perceived humanness | 1 | 1 | [89] |
| Anthropomorphic cues and perceived enjoyment | Acceptance towards chatbot | 1 | 1 | [32] |
| Anthropomorphic cues | Chatbot adoption intention | 1 | 1 | [6] |
| Visual Anthropomorphic Cues | Chatbot adoption intention | 1 | 1 | [58] |
| Chatbot Personality (tailored to consumer’s personality) | Chatbot adoption | 1 | 1 | [92] |
| Anthropomorphic cues | Trust in chatbots | 1 | 1 | [71] |
| Anthropomorphic creepiness-like traits | Loyalty to chatbots | 1 | 1 | [93] |
| Anthropomorphic creepiness-like traits and perceived technological risk | Privacy concerns when interacting with chatbot | 1 | 1 | [45] |
| Anthropomorphic cues and social presence | Customer experience with chatbot | 1 | 1 | [84] |
| Anthropomorphic-like traits that emulate a friendly personality | Acceptance towards chatbots | 1 | 1 | [94] |
| Anthropomorphic cues | Acceptance towards chatbots | 1 | 1 | [76] |
| Anthropomorphic cues | Chatbot usage intention | 1 | 1 | [77] |
| Anthropomorphic cues | Emotional connection towards the company | 1 | 1 | [55] |
| Anthropomorphic cues | Chatbot adoption | 1 | 1 | [95] |
| Anthropomorphic cues | Chatbot continuation intention | 1 | 1 | [78] |
| Anthropomorphic cues | Acceptance towards chatbot’s message | 1 | 1 | [96] |
| Anthropomorphic cues | Chatbot perceived competence | 1 | 1 | [70] |
| Anthropomorphic cues | Utilitarian attitude to use chatbots | 1 | 1 | [2] |
| Anthropomorphic cues | Attitude towards chatbot | 1 | 1 | [97] |
| Predictor | Empirical Evidence in Reviewed Studies | Robustness Classification | Typical Countries Studied | Typical Contexts Where Effects Are Stronger | Typical Moderating Conditions |
|---|---|---|---|---|---|
| Perceived usefulness | 39 empirical studies; all significant | Robust predictor | China, USA, Germany and UK | Across sectors (tourism, retail, finance, education) | Strongest during pre-adoption evaluation stages |
| Perceived ease of use | 39 empirical studies; 38 significant | Robust predictor | China, India and UK | E-commerce and service chatbot contexts | Stronger for inexperienced users and student samples |
| Performance expectancy | 22 studies; 21 significant | Robust predictor | China, Korea and USA | Digital service contexts and transactional chatbots | More important in utilitarian task-oriented interactions |
| Trust in chatbot | 13 studies; all significant | Robust predictor | China, Korea and USA | Banking, insurance, financial services | Stronger in high-risk service contexts |
| Anthropomorphism | 59 studies; 54 significant, 5 non-significant | Context-dependent predictor | China, UK, USA and India | Retail, hospitality, conversational commerce | Effects vary by chatbot design (text vs. voice) and degree of human-likeness |
| Emotional engagement/enjoyment | 21 studies; 19 significant | Context-dependent predictor | UK | Hedonic service contexts | Stronger for younger consumers (Gen Z) |
| Privacy concerns/perceived risk | 13 studies; all significant negative effects | Context-dependent barrier | China, USA and Europe | Finance, healthcare, insurance | Stronger when personal data disclosure is required |
| Technological anxiety | 4 studies; mixed results | Context-dependent barrier | India and China | Low technology readiness populations | Moderates relationships between chatbot quality and satisfaction |
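The robustness classification above rests on the share of empirical studies reporting a significant effect for each predictor. A minimal sketch of that tallying logic is below; the `classify` function, its 95% threshold, and the treatment of "mixed results" as 2 of 4 significant are illustrative assumptions for exposition, not the authors' coding rule.

```python
# Hypothetical sketch: labelling predictors as "robust" vs "context-dependent"
# from counts of (total, significant) empirical studies, mirroring the table.
# The 0.95 cut-off is an assumed threshold, not taken from the review.

def classify(total: int, significant: int, threshold: float = 0.95) -> str:
    """Label a predictor by the share of studies reporting a significant effect."""
    if total == 0:
        return "insufficient evidence"
    return "robust" if significant / total >= threshold else "context-dependent"

# Counts taken from the summary table above.
predictors = {
    "Perceived usefulness": (39, 39),
    "Performance expectancy": (22, 21),
    "Anthropomorphism": (59, 54),
    "Technological anxiety": (4, 2),  # "mixed results" approximated as 2 of 4
}

for name, (total, sig) in predictors.items():
    print(f"{name}: {classify(total, sig)}")
```

With these assumed counts, perceived usefulness (39/39) and performance expectancy (21/22) come out robust, while anthropomorphism (54/59) and technological anxiety fall into the context-dependent bucket, matching the table's classification.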
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Latulippe, J.-M.; Ladhari, R. Chatbot Adoption: A Systematic Literature Review. J. Theor. Appl. Electron. Commer. Res. 2026, 21, 98. https://doi.org/10.3390/jtaer21040098
