Systematic Review

Chatbot Adoption: A Systematic Literature Review

by
Jean-Michel Latulippe
1,* and
Riadh Ladhari
2
1
Department of Administration, Faculty of Administration, Université de Moncton, Moncton, NB E1A 3E9, Canada
2
Department of Marketing, Laval University, Quebec, QC G1V 0A6, Canada
*
Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 98; https://doi.org/10.3390/jtaer21040098
Submission received: 15 December 2025 / Revised: 11 March 2026 / Accepted: 17 March 2026 / Published: 24 March 2026

Abstract

Chatbots have spurred keen academic interest in the last decade, with researchers focusing primarily on the factors impacting chatbot adoption by consumers. Based on a systematic literature review (SLR) of 202 selected peer-reviewed papers published from 2015 to 2025, this article aims to achieve the following objectives: (1) describe publication trends, including relevant journals and highly-cited papers; (2) analyze the methodological approaches and theoretical models employed in the field; (3) investigate the drivers and barriers to consumers’ adoption of chatbots; and (4) outline directions for future research. Among the findings, the review reveals that perceived usefulness, perceived ease of use, anthropomorphic characteristics, consumer trust in chatbot applications, and the ability of chatbots to emulate a human-like personality are key drivers of chatbot adoption, whereas perceived risks and anxiety are the most reported barriers. This study offers several future research avenues and highlights the importance of considering the role of emotions and personality traits.

1. Introduction

Since 2015, chatbots have ranked consistently as one of the leading technological advances [1,2,3]. They represent the type of human–computer interaction (HCI) most readily utilized by consumers in their relationships with businesses in a mobile application context [4,5,6]. They are natural language processing (NLP) systems which act as virtual assistants capable of mimicking human interactions [7]. Chatbots provide interactions with consumers through text or voice [8]. In their capacity as virtual assistants, they answer consumers’ questions about products and services [9,10,11,12,13,14,15,16]. Chatbots enable users to ask questions and receive answers quickly and easily [13]. They provide a swift and convenient mode of interaction for consumers [10,17,18,19,20]. In turn, chatbots support consumers in their choices and decision-making processes across various service and purchase contexts [21,22,23,24,25,26,27,28,29,30,31], while alleviating the need to schedule appointments with salespersons [13,14]. Despite the obvious benefits, businesses across various sectors of industry (e.g., hospitality, online retail, and airlines) face continued consumer resistance to the use of chatbots [10,18]. Indeed, only about one third of Canadian and American consumers feel comfortable communicating with conversational agents [3]. In Europe, fewer than 50% of individuals report previous interactions with brand-specific chatbots [32]. Research on chatbot adoption is primarily grounded in established technology acceptance frameworks, most notably the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), which explain adoption through perceived usefulness, perceived ease of use, performance expectancy, and effort expectancy [3,16,19]. Complementing these utilitarian perspectives, anthropomorphism theory emphasizes the attribution of human-like qualities to chatbots [20,21].
Given the critical need to understand the drivers and barriers of consumer adoption of chatbots across the purchase journey, and the ever-growing number of studies in this domain, a systematic literature review is essential to determine the factors shaping consumer adoption. An SLR involves synthesizing existing knowledge in a reliable manner through the adoption of a reproducible, scientific, and transparent process supported by rigorous methodology designed to minimize bias [33,34,35]. For this review, several conceptual boundaries are specified. First, the focus is on consumer adoption of chatbots, rather than employee or organizational adoption, as the latter follows distinct decision processes and evaluative criteria. Second, the review primarily examines task-oriented and service chatbots used to support consumer decision-making and service interactions, while purely social or companionship chatbots are considered only when they inform adoption mechanisms. Third, both text-based and voice-based chatbots are included, as these modalities represent the dominant consumer-facing interfaces in prior research. Finally, the term “AI chatbots” encompasses both traditional rule-based or machine-learning–driven systems and more recent generative AI–enabled chatbots, provided that the study examines adoption-related outcomes from the consumer’s perspective.
Several SLRs have examined chatbots from various perspectives. However, our study differs in terms of its research objective. While Ref. [35] identified recurrent themes in chatbot research over the past decade, focusing on user satisfaction, engagement, and trust, this review specifically examines the factors influencing consumers’ decisions to adopt or reject chatbots. Similarly, Ref. [36] investigated the effects of chatbots on customer loyalty by emphasizing system quality, service quality, and information quality, whereas the present study aims to synthesize the drivers and barriers of chatbot adoption regardless of industry context. In contrast, Ref. [37] focused on the adoption of AI technologies in service delivery more broadly, with particular attention to the hospitality and tourism sector. The present study instead concentrates on chatbot adoption across multiple industries. Other reviews have adopted more focused scopes. For instance, Refs. [38,39] examined consumers’ intention to use intelligent conversational agents, while Ref. [10] provided a general overview of chatbot research without explicitly addressing adoption. More recently, Ref. [40] analyzed consumer responses to anthropomorphism in text-based AI chatbots, thereby focusing on a specific adoption mechanism. Additionally, Ref. [41] examined chatbot usage in learning contexts and Ref. [42] focused on acceptance drivers within the higher education sector. Overall, prior studies tend to focus on single outcomes (e.g., satisfaction, loyalty, or anthropomorphism), specific contexts (e.g., hospitality or education), limited time coverage, or particular methodological approaches (e.g., narrative synthesis, meta-analysis, or SLR) [35,36].
By contrast, the present SLR provides a comprehensive synthesis of both drivers and barriers to chatbot adoption across industries, thereby offering a broader and more integrative perspective on consumer adoption of chatbot technologies (see Appendix A).
This study proposes an integrative analysis of the growing body of research into the chatbot adoption phenomenon. It (1) describes publication trends, including relevant journals and highly-cited papers; (2) analyzes the methodological approaches and theoretical models used in the field; (3) investigates the drivers and barriers to consumers’ adoption of chatbots; and (4) outlines directions for future research.
The phenomenon of chatbot adoption emerges as a multidimensional process shaped by both utilitarian evaluations and social–relational perceptions. While technology acceptance frameworks such as TAM and UTAUT explain adoption through cognitive assessments of usefulness, ease of use, and performance expectancy, anthropomorphism and other constructs (e.g., social presence, empathy, and perceived humanness) extend these models by capturing the relational and emotional mechanisms underlying human–AI interaction. This manuscript integrates these complementary theoretical lenses to systematically examine how functional attributes, anthropomorphic cues, and affective responses jointly drive or inhibit consumer chatbot adoption.
This manuscript is structured into eight sections, commencing with the present introduction. Section 2 defines the chatbot concept, traces its evolution, reviews its main categories, and provides a brief outline of the theoretical background related to its adoption and continued use. Section 3 outlines the methodology employed for selecting the relevant manuscripts for the SLR. Section 4, which is descriptive, provides an overview of publication characteristics and methodological approaches. Section 5 discusses the drivers and barriers of consumer chatbot adoption, as well as the theoretical frameworks and constructs examined in the selected studies. Section 6 proposes directions for future research. Section 7 discusses the study limitations. Finally, Section 8 concludes the study.

2. Conceptual Background

To provide a structured understanding of chatbot adoption, this section first clarifies the concept and evolution of chatbots, then reviews their main categories, and finally briefly outlines the theoretical foundations used to explain consumer adoption.

2.1. Definition and Evolution of Chatbots

The term chatbot derives from a combination of ‘chat’ (conversation) and ‘bot’ (short for ‘robot’) [3,4]. The concept sometimes appears in the literature as chatterbot or talkbot [18]. Ref. [3] assert that chatbots represent a common type of virtual assistant and a subcategory within the broader class of conversational agents. Most studies define chatbots as “any software application that engages in dialogue with a human using natural language” [3] (p. 76). This definition appears to be the most prevalent in the literature and is therefore adopted for the purpose of this study.
The evolution of chatbots can be divided into three major phases. The early development phase (1900 to 1966) traces the earliest development of chatbots, with text-based chatbots beginning to emerge in the 1960s. A notable milestone at the end of this period is the creation of ‘ELIZA’ by Joseph Weizenbaum in 1966, a chatbot capable of transforming user sentences into questions [3,35]. The second phase, from 1967 to 1995, was marked by the introduction of more advanced and sophisticated chatbots with a distinct personality [43]. For instance, the chatbot ‘PARRY’ was described as having its own personality; however, its language understanding capabilities remained limited [43]. The integration of AI into chatbots started in 1988. However, the first online chatbot, as understood in contemporary terms, was launched in 1995. Finally, the third phase (post-1995) corresponds to the modern chatbots we know today, which are capable of assisting customers in accomplishing day-to-day tasks such as retrieving information from databases [43].
Nowadays, chatbots are considered computer programs proficient in understanding human languages, using NLP or the Artificial Intelligence Markup Language (AIML). They use a knowledge base with dialogue management rules, employing various techniques for processing user input (e.g., IBM’s Watson) [10]. Chatbots are used across various industries, such as hospitality, tourism, and retailing, where they are increasingly employed as substitutes for human employees to improve customer assistance [44]. It is predicted that by 2026, they will make up about 95% of online customer service interactions [44].

2.2. Categories of Chatbots

The literature reveals a broad spectrum of chatbot categories [10,43]. First, chatbots can be classified based on the communication channel they employ, which can be text, voice, image, or a combination of these modalities [43]. Text-based chatbots (e.g., the messaging service Kik) allow users to communicate with a machine through text [3]. However, improved recognition of human speech has led users to also communicate with the machine through speech (e.g., WhatsApp, Facebook Messenger, and Siri or Google Voice Message Assistant). Advanced chatbots can even interpret images, not only recognizing objects, but also expressing opinions and emotions, making them increasingly akin to genuine human companions [45].
Second, chatbots are categorized by their overall knowledge domain [43]. Chatbots capable of answering user queries from various domains are commonly referred to as “Generic chatbots,” and they find applications across various industries [10]. An illustrative example is AskWiz. Conversely, Interpersonal chatbots and Intrapersonal chatbots are domain-specific chatbots. Interpersonal chatbots offer specific services to consumers, such as booking services in restaurants and airlines, or providing answers to frequently asked questions. However, they do not prioritize forming a friendly, companionable interaction [43,46]. Intrapersonal chatbots prioritize forming a friendly, companionable interaction and are designed to closely understand an individual’s day-to-day needs [43,46].
Third, chatbots can further be categorized based on the goals they seek to accomplish, falling into informative, chat-based/conversational, and task-based chatbot categories [10,43,46]. For instance, task-based chatbots excel in performing various functions, including tasks like room reservations and skillfully acquiring information, and providing users with relevant responses in service sectors, such as the hospitality and tourism industry [43]. Regarding their operation, chatbots can be classified as human-mediated, where human computation plays a role, or fully autonomous, with potential limitations that necessitate ongoing staff intervention to enhance their intelligence [43,44]. It is worth noting, however, that human computation may suffer from slower information processing and struggle to manage a high volume of user requests. Lastly, chatbots can be categorized based on the permissions granted by their development platforms, falling into the categories of open-source or commercial [43].

2.3. Theoretical Background of Chatbot Adoption

Research on chatbot adoption is primarily grounded in information systems and consumer behavior theories, with five dominant theoretical frameworks shaping the literature. First, technology acceptance-based models, notably the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), conceptualize adoption as a function of cognitive evaluations, such as perceived usefulness, perceived ease of use, performance expectancy, effort expectancy, and social influence. Across studies, these concepts are consistently found to positively influence consumers’ adoption intentions and continued use of chatbot technologies.
Second, anthropomorphism theory emphasizes the role of human-like cues, empathy, and chatbot personality in shaping consumer responses. This stream of research generally reports positive effects of anthropomorphic features on trust, social presence, satisfaction, and adoption. However, excessive anthropomorphism can generate perceptions of creepiness and resistance, reflecting the nonlinear nature of human-likeness in human–AI interactions.
Third, trust-based models position trust as a central construct in chatbot adoption, operating as a direct predictor, mediator, or outcome of adoption-related processes. These models highlight the importance of reliability, response accuracy, data protection, and credibility in enhancing consumer confidence and acceptance.
Closely related, social presence theory underscores the importance of users’ sense of interacting with another social entity, showing that feeling “socially present” enhances trust, engagement, and adoption outcomes.
Finally, affective and emotion-based perspectives extend beyond purely cognitive explanations by demonstrating that hedonic motivation, enjoyment, empathy, and emotional responses significantly shape adoption and continuance, particularly in hedonic, relational, or service-oriented contexts. In parallel, research consistently identifies barriers to adoption, including perceived risk, privacy concerns, technological anxiety, and creepiness, all of which negatively affect adoption intentions and loyalty.

3. Methodology

The methodology followed the PRISMA protocol for reporting the findings (see Figure 1), which comprises three main phases: (1) establishing exclusion criteria for the study, (2) assessing the existing literature, and (3) reporting the findings [47]. This SLR adheres to a comprehensive process for selecting articles related to chatbot adoption. It ensures bias reduction in both selection and analysis, contributing to the robustness of conclusions within the research field [47]. The chosen SLR approach aligns with the study’s objective of presenting a coherent overview and an integrative framework for research on chatbot adoption.

3.1. Source Types and Source Quality

The sources retained for this study consist exclusively of peer-reviewed journal articles. Specifically, conceptual articles, literature reviews, and empirical studies, both quantitative and qualitative, published in English-language peer-reviewed journals, were selected and included. Books, book chapters, and conference proceedings were excluded because they are not subject to the same peer-review rigor [48,49]. Accordingly, non-peer-reviewed sources were not considered in the identification of manuscripts [49].
Systematic computerized searches of scientific articles were undertaken using three multidisciplinary bibliographic databases widely recognized in SLR: ABI/Inform Global, Business Source Premier, and Web of Science. These databases are complementary and collectively ensure comprehensive coverage of the relevant literature [36,48]. ABI/Inform Global was selected for its extensive coverage of peer-reviewed journals in business, management, marketing, and information systems, making it particularly well suited for capturing both foundational and applied research in these domains. The database is especially appropriate for capturing research situated at the intersection of technology adoption, consumer behavior, and electronic commerce, thereby ensuring comprehensive coverage of studies examining chatbot adoption in organizational and market contexts. Business Source Premier complements this coverage by providing access to a broad range of high-quality academic journals and practitioner-oriented publications, thereby increasing the likelihood of identifying relevant theoretical and empirical contributions. Finally, Web of Science was included to ensure multidisciplinary coverage and citation-based rigor, as it indexes high-impact journals across the social sciences and offers robust citation tracking capabilities. The combined use of these databases enhances both the breadth and depth of the literature search while reducing the risk of omitting influential and highly-cited studies.

3.2. Period, Search Mechanism, and Keywords

The SLR covers a period from January 2015, the year of the first relevant publication, to December 2025. Since research in the field continues to evolve rapidly, articles published online ahead of print were also included. The literature search was conducted directly within the three selected databases. Following the identification of relevant articles, duplicate records were manually removed from the dataset [48].
The search strategy relied on three main categories of keywords. The first category contained terms related to chatbots and their variants, such as chatbot, virtual assistant, chatterbot, talkbot, conversational agent, and natural language interface. These terms were selected based on prior reviews of the chatbot literature (e.g., [35,36]). The second category included keywords related to consumers (e.g., consumer, user, customer, etc.) as well as variables capturing psychological factors related to technology adoption, comprising trust in chatbots, anxiety associated with use, and concepts such as understanding and conversation. Prior research substantiates the importance of these psychological factors in shaping consumer adoption of technology, including chatbots [50,51]. Moreover, keywords such as understanding and conversation are often associated with interactions with artificial intelligence. The third category comprised keywords related to models and theories commonly used in research on consumer behavior and technology adoption, such as the Theory of Reasoned Action (TRA) and the Technology Acceptance Model (TAM). Search strings included all the keywords reported in Table 1.
The systematic search was conducted separately with search strings and inclusion criteria (see Appendix B). Searches were executed in ABI/Inform Global, Business Source Premier, and Web of Science Core Collection on 10 January 2026. In ABI/Inform Global and Business Source Premier, the search was applied to article titles, abstracts, and author-supplied keywords, while in Web of Science, it was applied to all available fields (i.e., title, abstract, and keywords). Across all databases, results were restricted to peer-reviewed journal articles, published in English between January 2015 and December 2025. No subject-area filters were imposed to avoid excluding interdisciplinary research spanning marketing, information systems, human–computer interaction, and consumer behavior. Document types were limited to research articles, conceptual papers, and review articles. Books, book chapters, conference proceedings, dissertations, and non-peer-reviewed sources were excluded to ensure methodological rigor and comparability across databases.

3.3. Broad First-Pass Recall and Staged Screening Strategy

Consistent with best practices for SLR, the search strategy was deliberately designed to maximize results during the initial retrieval phase, followed by progressive refinement during subsequent screening and full-text assessment. Accordingly, first-pass database queries employed a broad set of chatbot-related, consumer-related, and theory-related keywords, including generic terms (e.g., chatbot, virtual assistant, conversational agent, consumer, user, interaction, experience) as well as widely used technology adoption frameworks (e.g., TAM, UTAUT, TPB, anthropomorphism). This inclusive approach ensured that potentially relevant studies were not prematurely excluded at the search stage. Precision was subsequently increased through a multi-step screening process involving title and abstract review, followed by rigorous full-text assessment. During this process, articles that did not explicitly address consumer adoption, acceptance, usage intention, or continuance of chatbots’ use were excluded. This staged recall–precision strategy aligns with PRISMA recommendations and helps minimize the risk of false negatives while maintaining conceptual focus in the final sample.
During the title and abstract screening stage, inclusion decisions were not based solely on the explicit presence of predefined theoretical labels. Because terminology in this research domain varies across disciplines, records were assessed primarily for their conceptual relevance to the focal phenomenon rather than strict keyword matching. When titles or abstracts suggested potential relevance, such as studies examining related adoption mechanisms, user perceptions, or interaction characteristics, even if the expected theoretical terms were not explicitly stated, the authors retained the records for full-text evaluation. In cases of uncertainty, records were conservatively carried forward to the full-text screening stage. This approach ensured that conceptually relevant studies were not excluded prematurely due to variations in terminology across disciplines.
The number of articles returned from the search is 3100 articles: 380 articles from ABI/Inform Global, 474 from Business Source Premier, and 2246 from Web of Science.

3.4. Purification

The purification process included four steps (Figure 1). During the first step, 1140 duplicated articles were identified and eliminated. In the second step, all 1960 remaining articles underwent a twofold screening process, with initial inclusion based on article title and abstract. If, upon examination, the main domain terms such as chatbot, conversational agents, customers, TAM (technology acceptance model), TTF (task technology fit), UTAUT (unified theory of acceptance and use of technology), ITM (initial trust model), ELM (elaboration likelihood model), TRA (theory of reasoned action), IDT (innovation diffusion theory), stimulus–organism–response, expectation-confirmation model, anthropomorphism, theory-of-mind, U&G model (uses and gratifications model), trust-commitment theory, consumer acceptance model, diffusion of the innovation theory, and their respective variants did not appear in an article’s title or abstract, the article was excluded. The initial screening ultimately resulted in the exclusion of 1680 articles that failed to meet the predetermined inclusion criteria. The second component of the selection process involved a rigorous reading of article content and led to the elimination of 135 articles found to not comply with inclusion criteria. The excluded articles either made no mention of chatbot adoption factors or only superficially mentioned the term ‘adoption’. Upon completion of this second step, 145 articles were retained.
To assess the robustness of the screening process, a random sample of 50 excluded records (≈3% of all excluded articles) was independently re-evaluated by the two authors. This validation identified one false negative (2%). This one false negative could have been easily identified at Stage 3, as it was published in Computers in Human Behavior. Overall, the low false-negative rate indicates that the screening procedure was reliable and unlikely to have systematically excluded relevant studies, particularly given the possibility of recovering additional articles during the following step, through manual searches.
During the third step, a manual search of the journals with the greatest number of published articles (e.g., Computers in Human Behavior and Journal of Business Research) generated 55 additional articles. During the fourth and final step, a search of the references of each selected article, as suggested by [49,52,53], yielded 2 additional articles. The final number of selected articles was 202.
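As a cross-check, the screening flow reported in this subsection can be reconstructed step by step (a minimal arithmetic sketch; the variable names are ours, not part of the PRISMA protocol):

```python
# Reconstruct the PRISMA screening flow reported in Section 3.4.
identified = 380 + 474 + 2246        # ABI/Inform + Business Source Premier + Web of Science
assert identified == 3100

after_dedup = identified - 1140      # Step 1: duplicates removed
assert after_dedup == 1960

after_title_abstract = after_dedup - 1680     # Step 2a: title/abstract screening
after_full_text = after_title_abstract - 135  # Step 2b: full-text reading
assert after_full_text == 145

final = after_full_text + 55 + 2     # Steps 3-4: manual journal and reference searches
print(final)  # 202
```

Every intermediate count matches the figures reported in the text, confirming that the reported flow is internally consistent.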

3.5. Coding

This SLR employed content analysis, the prevailing method in systematic reviews [47]. Two researchers conducted the coding (see Appendix C). No software was used for the content analysis.
To enhance the reliability of the coding process, the selected articles were coded independently by the two researchers using the predefined coding scheme presented in Appendix C. Following the initial coding stage, the coders compared their coding results and discussed any discrepancies. Differences in classification were resolved through discussion and iterative review of the relevant articles until consensus was reached. In cases where uncertainty remained, the coders jointly revisited the article to ensure consistent interpretation of the coding categories. This reconciliation procedure helped ensure consistency in the application of the coding framework and enhanced the transparency and reproducibility of the content-analysis process.

4. Findings

This section maps the literature relating to consumer adoption of chatbots by means of a descriptive analysis deemed essential for identifying trends in topical literature. The analysis provides data on study characteristics (e.g., year of publication, academic journals, and most-cited articles), methodological approaches, and theoretical foundations (theories/models, variables). This preliminary step facilitates an understanding of the nature of the research field and identifies gaps deserving further research.

4.1. Publication Trends, Journals, and Citations

4.1.1. Publication Trends

Figure 2 depicts the pattern of publications from the beginning of 2015 to the end of 2025; the data are plotted as discrete yearly counts.
The publication curve over time points to the years 2020, 2021, 2022, 2023, 2024, and 2025 as having the highest number of publications, namely 16, 14, 21, 13, 38, and 83 articles, respectively. Articles published in the last five years account for 83.66 per cent (169 out of 202) of the selected manuscripts. The public introduction of generative AI chatbots in 2022 likely contributed to the recent surge of interest in this topic.
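The share of recent publications can be verified directly from the yearly counts (a minimal arithmetic sketch; the dictionary structure is ours):

```python
# Verify the share of articles published in the last five years (2021-2025),
# using the yearly publication counts reported above.
counts = {2020: 16, 2021: 14, 2022: 21, 2023: 13, 2024: 38, 2025: 83}
recent = sum(v for y, v in counts.items() if y >= 2021)
share = round(100 * recent / 202, 2)
print(recent, share)  # 169 83.66
```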

4.1.2. Academic Journals

The findings show that most of the retained articles relate to technology and information systems management (n = 38; 18.81% of the total of 202), with 18 articles (8.91%) published in management journals. Table 2 below shows that the International Journal of Human–Computer Interaction published the greatest number of articles on chatbot adoption factors (n = 21; 10.40%), followed by Computers in Human Behavior (n = 13; 6.44%), the Journal of Theoretical and Applied Electronic Commerce Research (n = 11; 5.45%), and the Journal of Business Research (n = 6; 2.97%).

4.1.3. Top-Cited Articles

Table 3 highlights that papers on chatbot adoption boast high citation counts in ABI/Inform Global, Business Source Premier, and Web of Science (as of 31 December 2025). The leading article on chatbot adoption is the study by [54], cited 1723 times as of December 2025, followed by articles by [1] with 1637, [55] with 1571, and [56] with 1382 citations, respectively. Among the top ten, five were published in Computers in Human Behavior.
Ref. [54] empirically examined, through a randomized online experiment, how chatbot verbal anthropomorphic design affects user compliance with requests. They showed that both anthropomorphism and the need to stay consistent significantly increase the likelihood that users comply with a chatbot’s request for service feedback. Most of the top-cited studies were among the earliest published in the field (2015 and 2017) and addressed timely and insightful issues, which likely explains their higher citation counts. For instance, Ref. [57] were among the first to explore how users’ explicit and implicit expectations of human language are transferred and expressed in human–computer interaction, reporting that humans communicate differently when they know their conversational partner is a computer rather than another human. Their study reported that people sent more messages, but wrote fewer words per message when interacting with chatbots, suggesting that participants adapted their communication style to mirror the chatbot’s responses. Moreover, Ref. [58] examined whether chatbots need to be made more human-like. Whereas most researchers sought to determine whether more anthropomorphic cues were needed to increase customer adoption, their study suggested the opposite: a high level of interactivity between the chatbot and the customer could compensate for the lack of anthropomorphic visual cues. One likely reason for this study’s high citation count is that it steered research in a new direction.

4.2. Methodologies

4.2.1. Methods

To facilitate the interpretation of the figures presented in Figure 3, it is important to note that the final sample consists of 202 articles. Among these, 20 correspond to different types of literature reviews, including 3 traditional literature reviews, 4 meta-analyses, and 13 SLRs. The remaining 182 (202 − 20) empirical articles are distributed as follows: 10 qualitative studies, 18 studies using a mixed-methods approach (qualitative and quantitative), and 154 studies relying exclusively on quantitative methods. Among the latter, five articles report two distinct quantitative empirical studies, conducted in different sectors or national contexts. To account for this methodological specificity, these studies were counted separately, bringing the total number of quantitative empirical studies to 177 (154 + 5 + 18) and the total number of qualitative empirical studies to 28 (10 + 18).
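The decomposition above can be checked with a few lines of arithmetic (a minimal sketch; the variable names are ours):

```python
# Reconstruct the counts of study designs reported in Section 4.2.1.
total = 202
reviews = 3 + 4 + 13                 # traditional reviews, meta-analyses, SLRs
empirical = total - reviews
assert reviews == 20 and empirical == 182

qualitative_only, mixed, quantitative_only = 10, 18, 154
assert qualitative_only + mixed + quantitative_only == empirical

# Five quantitative articles each report two distinct studies, and
# mixed-methods articles count toward both the quantitative and
# qualitative totals.
quantitative_studies = quantitative_only + 5 + mixed
qualitative_studies = qualitative_only + mixed
print(quantitative_studies, qualitative_studies)  # 177 28
```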
Consequently, the findings indicate a greater emphasis on theory testing than on theory building in chatbot research. Most quantitative studies relied on surveys, whereas qualitative studies mostly used in-depth interviews.

4.2.2. Study Sectors

Figure 4 highlights the distribution of studies across various industries, revealing a significant focus on the hospitality and tourism sector (n = 24), as well as the e-commerce industry (n = 15). Additionally, a subset of studies (n = 43) explored multiple sectors simultaneously (e.g., hospitality and finance). Investigations were also carried out within the insurance and banking sector (n = 20). These findings underscore the importance of conducting studies across diverse settings to develop a more comprehensive understanding of the factors that either drive or hinder consumer adoption of chatbots. It is not surprising that a substantial number of studies were conducted in the hospitality and tourism sector since it has experienced rapid growth in the deployment of chatbots over recent years [44,61]. Moreover, several studies have focused on the banking sector, where chatbots have gained considerable visibility and acceptance due to their relatively early implementation in customer service interactions.

4.2.3. Study Samples

Table 4 and Table 5 summarize the sample sizes used across the reviewed studies, both quantitative and qualitative. In quantitative studies, sample sizes typically range from 200 to 450 respondents. Given the analytical techniques employed, most studies include at least 180 participants (e.g., [3,62,63]). Two articles provide additional results by reporting the number of interactions between participants and chatbots, with interaction counts of approximately 6200 and 57,000, respectively. Most studies employed convenience sampling techniques. However, it is worth noting that convenience samples can only be generalized to the accessible population from which they are drawn. While such convenience samples can have relatively high internal validity, their low external validity represents their primary limitation.
Among the qualitative studies (28), sample sizes range from 14 to 60 participants. For instance, Ref. [64] conducted interviews with 18 participants across 12 countries. Their research was not limited to a specific industry; however, all participants were required to be familiar with a social chatbot, Replika. In contrast, Ref. [14] examined managerial perceptions of chatbots in the financial sector by interviewing 14 managers responsible for chatbot and IT operations in the Korean financial industry. Similarly, Ref. [7] conducted a study in the United Kingdom involving 29 students to explore consumer perceptions of chatbots in healthcare, using semi-structured interviews. Finally, Ref. [18] investigated chatbot use across 15 retail companies, aiming to understand the role of chatbots in retail organizations and how these technologies address industry-specific needs.

4.2.4. Geographic Locations

The findings reported in Table 6 indicate that studies have been undertaken in diverse countries around the globe. Most were carried out in the United States (n = 20), the United Kingdom (n = 13), China (n = 35), India (n = 18), and the Netherlands (n = 9), and on a more modest scale, in Korea (n = 12), Italy (n = 5), Australia (n = 4), Saudi Arabia (n = 4), Canada (n = 3), Croatia (n = 3), Germany (n = 2), Norway (n = 2), Pakistan (n = 2), Thailand (n = 2), Turkey (n = 2), and Lebanon (n = 2). It is worth mentioning that several studies were completed in more than one country (n = 25).
Most studies pertaining to chatbots were conducted in China, the United States, and the UK. This concentration likely reflects the widespread adoption of the technology and its increased use by companies across different sectors in these markets. More studies in other regions, particularly in Africa, would be welcome, as consumers’ cultural preferences may differ and lead to different reactions towards chatbots. Although valuable, the substantial concentration of studies in China and other Asian contexts may limit cross-cultural generalizability, particularly given differing privacy norms, regulatory frameworks, and technology acceptance patterns.

4.2.5. Methods of Data Analysis

Most studies used a cross-sectional approach (154 out of 177), while only one used a longitudinal approach. Similarly, scenario-based experiments (21 studies) far outnumber field experiments (1 study). The literature is thus heavily dominated by scenario-based experiments and cross-sectional self-report surveys. While statistically rigorous, such designs limit causal inference and real-world generalizability. Field experiments, longitudinal designs, and behavioral usage data remain underutilized. Because most studies rely on self-reported adoption intentions rather than observed behavior, reported relationships among perceptual constructs may be inflated, and continuance or habitual usage patterns are not fully captured.
Regarding analysis methods, this review finds that 102 out of 177 articles used structural equation modelling (SEM), whereas variance analyses were used in 32 articles, linear regressions in 26, and moderated mediation analyses in 16. Given that most studies are cross-sectional and survey-based, it is not surprising that SEM and variance analysis are the most common methods. The 28 qualitative articles all used detailed coding and thematic analyses. Figure 5 details the different methods of analysis. These patterns reinforce the need for more field experiments.

4.3. Theoretical Considerations Explaining Chatbot Adoption

A broad range of theoretical models has been used to study chatbot adoption. Most studies (121) based their research on a single theoretical model, while a number of studies combine two or more theoretical models to explain chatbot adoption (e.g., [3,65,66]). Table 7 presents the theoretical models used to explain consumer adoption of chatbots. The most frequently used theories, excluding systematic reviews, literature reviews, and meta-analyses, are anthropomorphism theory (59 studies), the technology acceptance model (TAM) and its derivatives (49 studies), the unified theory of acceptance and use of technology (UTAUT) and its derivatives (26 studies), and social presence theory (11 studies). A range of additional theories has also been employed, albeit to a lesser extent.
Overall, chatbot adoption research reveals theoretical convergence rather than fragmentation. TAM and UTAUT provide the utilitarian foundation by explaining how functional value and effort shape adoption decisions, while anthropomorphism and social presence theories extend these models by introducing relational and social mechanisms that enhance trust and emotional engagement. Trust acts as a bridging construct, linking cognitive evaluations (e.g., usefulness, performance expectancy) with affective responses (e.g., enjoyment, empathy). However, partial inconsistencies emerge across contexts: whereas TAM and UTAUT variables are generally robust predictors, their effects weaken when privacy risk and technological anxiety are salient, particularly in high-risk contexts. Similarly, anthropomorphism mostly facilitates adoption but may backfire through creepiness or uncanny valley effects when human-likeness is excessive. Taken together, the literature suggests that chatbot adoption is best understood through integrative models that combine utilitarian, social, and affective perspectives, with contextual factors and interaction timing shaping whether these theoretical perspectives reinforce or offset one another.
At the theoretical level, two main observations can be made about chatbot adoption studies. First, many studies rely on TAM and UTAUT-based frameworks, frequently reproducing similar models with minor contextual variations. Although these models robustly explain utilitarian determinants (e.g., usefulness, effort expectancy), they often treat social and affective constructs as peripheral extensions rather than as central adoption mechanisms. This may crowd out alternative explanatory paradigms, such as relational, emotional, or socio-technical perspectives. Second, many studies test direct effects without explicitly modeling moderators such as industry risk level, chatbot embodiment (text vs. voice), or regulatory environment. Additional moderation analyses are therefore needed to explain when and why anthropomorphism or trust-based effects strengthen or weaken.
These limitations suggest that chatbot adoption research would benefit from stronger theoretical diversification, explicit modeling of contextual moderators, and greater methodological variety. Addressing these gaps would move the field beyond incremental extensions of existing acceptance frameworks towards more robust and context-sensitive explanatory models.

5. Factors Impacting Consumer Adoption of Chatbots

5.1. Drivers to Consumer Adoption of Chatbots

The factors discussed in this section are derived from the content analysis of the studies retained in the review. Each study was coded for the theoretical framework employed, the adoption-related variables examined, and the direction and statistical significance of the reported relationships (see Appendix D). These variables were subsequently grouped into five higher-order conceptual categories: (1) consumer trust in chatbots; (2) attributes favouring consumer adoption of chatbots; (3) anthropomorphic traits of chatbots and chatbots’ personality; (4) emotions regarding chatbot adoption; and (5) barriers to consumer adoption of chatbots.

5.1.1. Consumer Trust in Chatbots

Trust is defined as the “concerned parties’ confidence in each other’s credibility in an exchange relationship” [5] (p. 298). This SLR identifies 13 empirical studies, all reporting statistically significant findings, that examine the role of trust as a mediating variable [21,67,68,69], an independent variable [37,62,70], or a dependent variable [50,71,72,73,74]. In addition, the role of trust has been examined from a qualitative perspective [7]. Research on trust in chatbot adoption is predominantly concentrated in the e-commerce, financial, and insurance sectors, with most studies conducted in China.
Several studies have examined the role of trust in shaping consumer responses to chatbots. For instance, Ref. [50] identified the essential factors influencing consumer trust in chatbots and emphasized the importance of providing accurate, prompt, and efficient responses. Similarly, Ref. [75] conceptualized trust in chatbots as a multidimensional construct encompassing trust in functionality, reliability, and data protection, with trust in data protection emerging as an important dimension.
Beyond functional aspects, several studies highlight the role of interactional and design features in building trust. For example, Ref. [69] reported that chatbot interaction style (social-oriented vs. task-oriented) and visual cues (e.g., avatar presence or absence) influence social presence, which in turn affects trust, perceived enjoyment, and overall attitudes towards chatbots. In a related vein, Ref. [67] found that compatibility, perceived ease of use, and social influence significantly contribute to the development of initial trust in chatbots, while Ref. [71] showed that trust is shaped by chatbot characteristics such as credibility, competence, anthropomorphism, social presence, and informativeness.
In sum, these studies highlight the central role of trust in influencing consumer attitudes towards, and adoption of, chatbots. Furthermore, they emphasize the importance of designing chatbots to deliver accurate, reliable, and secure interactions, while also conveying a social presence that enhances trust and facilitates successful adoption.

5.1.2. Attributes Favouring Consumer Adoption of Chatbots

This study identified many factors that have an impact on chatbot adoption, such as perceived ease of use, perceived usefulness, and performance expectancy. As shown in Table 8, perceived ease of use positively influences consumer adoption of chatbots, as well as intentions of using, reusing, and continuing to use chatbot applications [3,63,65,66].
Perceived ease of use appears in 39 empirical studies, all but one reporting significant effects. Perceived usefulness appears in 39 empirical studies, all reporting significant effects. For instance, Ref. [76] showed that perceived ease of use, perceived usefulness, perceived trust, perceived intelligence, and anthropomorphism jointly influence the adoption of AI-powered chatbots for travel planning. Across the literature, perceived usefulness is most often examined with adoption or usage intention as the dependent variable, while limited attention is given to satisfaction, actual usage, and continuance outcomes. The sectors most frequently found in this context are hospitality and tourism, followed by education, with a large proportion of studies conducted in China. In the study by [3], the objective was to pinpoint factors positively influencing the usage intention towards the “Emma” chatbot, a shopping assistant designed to support consumers during the pre-purchase phase of fashion product searches. Perceived usefulness was conceptualized as an extrinsic motivation, while expected performance improvement served as a source of utilitarian gratification. In their study among German university students, selected for their technological proficiency, the authors found that both authenticity of conversation and perceived usefulness significantly enhance user acceptance of the chatbot. Taken together, the consistent influence of perceived usefulness across industries and countries reinforces its role as a core determinant of consumer adoption and acceptance of chatbot applications, as well as its central role in shaping user attitudes and interactions with chatbot technologies.
Performance expectancy, derived from the UTAUT model, appears in 22 empirical studies, with 21 reporting significant relationships and one reporting a non-significant effect. Adoption and intention of adopting are the most frequently examined dependent variables. A substantial proportion of these studies were conducted in China, and while multiple sectors have been explored, hospitality and tourism remain the most studied context. For instance, Ref. [77] identified performance expectancy as a key determinant of chatbot usage intention. Similarly, Ref. [12] found that performance expectancy, facilitating conditions, and social influence significantly influenced intention of using a coaching chatbot, whereas effort expectancy and perceived risk did not. Finally, Ref. [78] suggested that while performance expectancy is positively related to attitudes towards chatbot adoption, chatbot-specific characteristics, such as the perceived intelligence and anthropomorphism of the chatbot, might play a more substantial role in driving adoption than performance expectancy alone.
Table 8. Examples of studies on attributes driving consumer adoption of chatbots.
| Antecedent | Dependent Variable | Number of Empirical Studies | Effect Significance | Study |
| --- | --- | --- | --- | --- |
| Perceived ease of use; perceived usefulness | Adoption or acceptance of chatbots | 3 | 3 | [3,76,79] |
| Perceived ease of use; perceived usefulness | Acceptance of chatbot (1) | 1 | 1 (2) | [72] |
| Perceived ease of use; perceived usefulness | Intention of reusing chatbot | 2 | 2 | [65,80] |
| Perceived ease of use; perceived usefulness | Perceived value of chatbots | 1 | 1 | [81] |
| Perceived ease of use | Initial chatbot trust | 1 | 1 | [67] |
| Perceived ease of use; perceived usefulness | Satisfaction | 1 | 1 | [82] |
| Perceived usefulness | Attitudes towards chatbots | 1 | 1 | [83] |
| Perceived usefulness | Customer experience | 1 | 1 | [84] |
| Perceived ease of use; perceived usefulness | Intention of using chatbots | 1 | 1 (3) | [85] |
| Performance expectancy | Continuance intention of using chatbot | 1 | 1 | [78] |
| Performance expectancy | Trust in chatbot | 1 | 0 (4) | [67] |
| Performance expectancy | Intentions of adopting (use or accept) chatbots | 4 | 4 | [12,77,86,87] |

(1) Learning behavior as an indirect measure of chatbot acceptance. (2) Only perceived usefulness has a significant effect on chatbot acceptance. (3) Only perceived usefulness has a significant effect on chatbot acceptance. (4) Non-significant effect.

5.1.3. Anthropomorphic Traits of Chatbots and Chatbots’ Personality

Anthropomorphism is another key factor influencing consumer adoption of chatbots. It is defined as “the attribution of humanlike characteristics, behaviours or mental states to non-human entities such as objects, brands, animals and, more recently, technological devices” [88] (p. 2). Anthropomorphism includes a wide range of traits from physical appearance to mental and emotional characteristics typically associated with humans [2,10,79,80,88]. Prior research distinguishes between two forms of anthropomorphism: conscious anthropomorphism and mindless anthropomorphism [9,89]. Conscious anthropomorphism occurs when consumers deliberately attribute human traits to chatbot applications [89]. For example, when consumers encounter a mobile chatbot application, they readily perceive the human traits of the application and expect the latter to behave just like a human being. In contrast, mindless anthropomorphism refers to unconscious perceptions and implicit expectations regarding how a system should behave [89].
Anthropomorphism has been investigated in 59 empirical studies, with 54 reporting significant relationships and five reporting non-significant effects. Adoption (32 studies) and satisfaction (41 studies) are the most frequently studied dependent variables (see Table 9). Most studies were conducted in China, followed by India, the United Kingdom, and the United States. Research contexts were primarily non-sector-specific (e.g., conversational AI in general) or within the hospitality and tourism sector.
Importantly, the ability of chatbots to demonstrate empathy appears particularly influential, while chatbots lacking empathetic cues risk being perceived as threatening to human identity [88]. Recent research also indicates that well-designed anthropomorphic cues greatly enhance consumers’ intention of adopting chatbots [9].
The reviewed studies report several nuanced findings regarding the role of anthropomorphic traits in chatbot adoption. For instance, some evidence suggests that consumers may prefer chatbots that are less interactive [6]. Meanwhile, other studies find that chatbots capable of delivering an abundance of high-quality messages can contribute to consumer adoption even in the absence of appealing anthropomorphic traits [58]. Other studies reported that extroverted chatbot personalities are associated with higher sales [96], and that consumers are more likely to value chatbots that exhibit pleasant, accommodating, or engaging personalities [94]. Additionally, research concludes that consumers prefer interacting with chatbots that emulate personalities similar to their own (e.g., an introverted chatbot interacting with introverted users), which enhances engagement [96]. However, anthropomorphism can also have adverse effects. For instance, one study focusing on ‘creepy’ or ‘creepy-like’ anthropomorphic traits found that perceived creepiness negatively affects loyalty towards chatbots, both directly and indirectly through trust and negative emotional responses [93]. Chatbots perceived as creepy tend to decrease consumer loyalty [93] and increase privacy concerns, especially when perceived risks during interactions are salient [45].
While most studies on anthropomorphic traits support their positive influence on chatbot adoption, others suggest that these traits alone are not sufficient to ensure adoption [9,70,93]. Several studies argue that chatbots should emulate and resemble human beings as closely as possible [9,58,84,92], while recent work stresses the importance of chatbots acting kindly, being pleasant-looking, and avoiding anthropomorphic traits that might unsettle consumers [93,94]. Research also shows that the ability of chatbots to detect consumer mood and respond accordingly is often more effective than responding solely to consumer functional needs. These studies stress empathy as a central driver of consumer adoption of chatbots.

5.1.4. Emotions Regarding Chatbot Adoption

Emotions elicited by chatbots play a critical role in shaping consumer adoption and sustained use. For instance, Ref. [98] have shown that the use of emojis in chatbot communication increases interaction satisfaction by fostering a sense of intimacy. This effect was strongest among consumers with hedonic rather than utilitarian goals, stressing the role of emotional enjoyment in chatbot adoption [98]. Similarly, Ref. [99] investigated hedonic motivation further and found that it is a significant predictor of chatbot adoption among Generation Z, alongside social influence and habit. These findings suggest that consumers adopt chatbots not only for their functional benefits, but also because they are engaging and enjoyable. Emotions have been investigated in 21 empirical studies, with 19 reporting significant relationships and two reporting non-significant relationships. In these studies, adoption, intention of adopting, and satisfaction are the most frequently studied dependent variables. Most studies were conducted in China, followed by the United Kingdom and the United States, and were largely set in non-sector-specific contexts.
Prior research distinguishes between positive and negative emotional responses elicited by chatbot interactions [22,100]. For instance, anthropomorphism, perceived warmth, empathetic language, trust, and enjoyment have been shown to enhance satisfaction, engagement, and acceptance of chatbots [100]. Conversely, privacy concerns, unmet expectations, and failures to accurately understand user queries can elicit dissatisfaction and negative emotions, potentially leading to resistance and negative word-of-mouth if left unaddressed [22,24]. Experimental studies conducted in China on intelligent voice assistants further reveal that privacy concerns can generate perceptions of creepiness which, in turn, increase consumer resistance, particularly when chatbots are designed with “servant-like” characteristics [101]. A meta-analysis of 42 studies by [101] has shown that moderate levels of anthropomorphism increase perceived social presence and trust, whereas excessive levels of human-like features evoke consumer fear, reflecting the uncanny valley effect. Complementing these findings, a recent qualitative study conducted in the United Kingdom and Qatar with users of Siri, Google Assistant, and Replika has shown that some consumers develop emotionally committed relationships with AI chatbots, using them for personal development, companionship, or coping with trauma [102]. Overall, these studies show that emotional responses to chatbots (e.g., intimacy, empathy, and enjoyment) contribute to shaping consumers’ attitudes (e.g., adoption) and sustained usage across contexts. These findings underscore the importance of designing emotionally attuned chatbot interactions that balance human-like qualities with user comfort to sustain long-term adoption.
The chatbot adoption literature is largely dominated by utilitarian constructs (e.g., perceived usefulness, perceived ease of use, and performance expectancy), while affective and emotional dimensions remain comparatively underexplored, an imbalance with important theoretical implications. Although some recent studies incorporate affect-laden constructs, these are generally treated as secondary rather than as central psychological mechanisms shaping attitudes or intentions. Future research should therefore integrate affective processes as core components of pre-adoption and usage models by extending acceptance frameworks with affect-centered perspectives, thereby better reflecting the hybrid utilitarian–affective nature of human–AI interactions and aligning adoption models with users’ emotionally grounded experiences.

5.2. Barriers to Consumer Adoption of Chatbots

Barriers to consumer chatbot adoption encompass factors that hinder consumer willingness to adopt or continue using chatbot applications. These barriers have been examined in 13 empirical studies, all reporting significant effects. Among them, perceived risk emerges as a central barrier to chatbot adoption and has been tested in four studies [45,62,103,104] with consistent negative effects reported across all cases. Perceived risk refers to users’ uncertainty regarding chatbot use due to potential adverse consequences associated with the disclosure of personal information [45,62,104]. For example, Ref. [104] reported that perceived risk reduces customer satisfaction and negatively affects continuance intention. Technological anxiety constitutes another barrier and is examined in four studies [45,63,103,105]. Technological anxiety refers to the fear and discomfort individuals experience when interacting with new technologies [103]. For instance, Ref. [103] investigated the joint effects of perceived risk (time and privacy risk), individual innovation barriers (technological anxiety and openness to experience), and cognitive perceptions (ease of use, usefulness, convenience, and information quality) on attitudes towards continued chatbot use and continuance intention. Their findings indicate that perceived time risk, perceived privacy risk, technological anxiety, and perceived information quality significantly influence attitudes, although only perceived privacy risk directly affects continuance intention.
Evidence regarding the role of technological anxiety is, however, mixed. While Ref. [63] find no significant relationship between technological anxiety and chatbot adoption in the hospitality and tourism sector, Ref. [105] demonstrate that technological anxiety moderates the relationship between chatbot quality dimensions (e.g., understandability, reliability, responsiveness, assurance, and interactivity) and users’ post-use confirmation in online travel agencies. Specifically, higher levels of technological anxiety strengthen the relationship between chatbot quality and confirmation which, in turn, enhances satisfaction and continuance intention.
Additional research links barriers to perceptions of creepiness. For example, Ref. [93] show that usability concerns, privacy concerns, and user characteristics such as technological anxiety and need for human interaction increase perceptions of chatbot creepiness, which subsequently affect negative emotions, trust, and chatbot loyalty (e.g., reuse and recommendation intentions). Regarding privacy concerns, the findings suggest that the effects are context-dependent. In particular, Ref. [93] argue that the heightened privacy concerns in their study may be explained by the sensitive personal information required during car insurance chatbot interactions. More recently, Ref. [101] find that privacy concerns related to intelligent voice assistants increase perceived creepiness, which in turn mediates the effect of privacy concerns on consumer resistance.

5.3. Why Do Findings Differ? Boundary Conditions in Chatbot Adoption

Although several constructs such as perceived usefulness, trust, and anthropomorphism are frequently identified as significant predictors of chatbot adoption, the review also reveals variation in effect size and statistical significance across studies. These differences do not necessarily indicate theoretical inconsistency, but rather reflect boundary conditions shaping chatbot adoption outcomes. Five recurrent patterns emerge. First, industry context matters. In high-risk sectors such as banking and insurance, trust, data protection, and perceived risk play a more central role than anthropomorphic cues. Conversely, in retail and hospitality contexts, anthropomorphism, empathy, and experiential value appear more influential in shaping satisfaction and adoption intentions. In public service settings, perceived usefulness and clarity of information tend to dominate, while hedonic and relational elements are less salient. Second, chatbot type influences adoption mechanisms. Informational chatbots (e.g., FAQ or knowledge-based bots) rely more heavily on utilitarian constructs such as performance expectancy and information quality. In contrast, transactional chatbots (e.g., booking or financial bots) require higher levels of trust and perceived reliability, which can be of the utmost importance within the hospitality sector. Furthermore, voice-based chatbots tend to amplify perceptions of social presence and humanness compared to text-based interfaces, potentially increasing both trust and creepiness effects depending on design. Third, the adoption stage shapes determinant strength. During trial or initial adoption phases, perceived ease of use, compatibility, and initial trust are particularly influential. In contrast, during routine or continuance usage, satisfaction, reliability, emotional engagement, and habit become more salient predictors. Fourth, measurement differences contribute to variation.
Studies operationalize constructs such as trust, satisfaction, and anthropomorphism using different scales and conceptualizations (e.g., multidimensional vs. unidimensional trust measures). These operational variations may partially explain inconsistent findings across contexts. Fifth, geographic and regulatory contexts moderate adoption dynamics. Many studies are conducted in China and other Asian markets, where collectivist norms and technology acceptance patterns may differ from Western contexts. Moreover, regulatory environments emphasizing data protection (e.g., stricter privacy regulations) may amplify perceived risk and privacy concerns, thereby attenuating the effects of anthropomorphic or hedonic drivers. Taken together, these boundary conditions suggest that chatbot adoption is contingent upon contextual, technological, and cultural factors. Future theoretical models should therefore incorporate moderators explicitly rather than assuming universal predictor strength across settings.

5.4. Robust and Context-Dependent Drivers of Chatbot Adoption

A distinction emerges between drivers that demonstrate relatively consistent empirical support and those whose effects appear more context-dependent. Core technology acceptance constructs such as perceived usefulness, perceived ease of use, and trust consistently show positive relationships with chatbot adoption across studies and service contexts. These variables appear to represent robust drivers reflecting utilitarian evaluations of chatbot performance and interaction efficiency. However, several socially and affectively oriented constructs, such as anthropomorphism, social presence, emotional engagement, privacy concerns, and technological anxiety, display more heterogeneous effects. Their influence tends to vary depending on contextual moderators such as service sector, chatbot type (text vs. voice), user characteristics, cultural norms, and regulatory environments. Table 10 presents a summary separating robust drivers from contingent drivers.

6. Discussion and Future Research

This systematic literature review provides a comprehensive overview of research on the adoption and sustained use of chatbots. The study involved a meticulous selection process, resulting in the inclusion of 202 relevant manuscripts. The publications spanned multiple disciplines, including management information systems, marketing, and consumer behavior, and covered a variety of sectors such as tourism, hospitality, and e-commerce. They also draw on a wide range of models and theories, such as TAM, UTAUT, the Social Presence Theory, and the Trust-Commitment Theory.
In the paragraphs that follow, we discuss several key issues identified in the systematic review findings. This analysis aims to lay the groundwork for future research on the factors influencing both the adoption and continued usage of chatbots.
  • The bulk of studies on chatbot adoption by consumers is based on theoretical models such as TAM, UTAUT and, to a lesser extent, U&G models. In contrast, we observe a dearth of research on consumer adoption of chatbots based on theoretical models such as task technology fit (TTF), the theory of reasoned action (TRA), or the consumer acceptance of technology model (CAT). TAM and UTAUT, which dominate the literature, adopt a predominantly utilitarian and cognitive perspective, with a strong emphasis on efficiency, performance, and instrumental value. By contrast, the CAT model acknowledges that consumers adopt technologies not only for their functional benefits, but also for their ability to generate emotional engagement, enjoyment, and stimulation. It integrates emotional dimensions such as pleasure and arousal, which capture the valence and intensity of emotional responses. Future research that integrates multiple theories and concurrent models is essential to enrich our understanding of chatbot adoption through complementarity. By jointly considering utilitarian, affective, social, and risk-related perspectives, such integrative approaches can provide a more comprehensive and nuanced understanding of the relative influence of adoption drivers and barriers.
  • The effects of perceived ease of use, usefulness, and, to a lesser extent, anthropomorphism are generally consistent across the reviewed studies. The few non-significant effects reported in the literature could be explained by contextual, temporal, and design-related contingencies. First, the salience of utilitarian predictors such as perceived ease of use and usefulness appears to depend on the stage of interaction. These constructs are more influential at early adoption stages or prior to actual use, whereas their effects often weaken once users gain experience and shift attention towards trust, emotional responses, and privacy concerns. This pattern is evident in studies showing that when perceived privacy risk or technological anxiety is introduced into adoption models, traditional TAM variables become non-significant or lose strength, especially in high-risk or data-intensive service contexts. This indicates the need for a nuanced investigation that considers both the timing of chatbot use and the specific characteristics of the study context. On the one hand, emphasis should be placed on factors such as perceived ease of use, perceived usefulness, and perceived convenience, particularly when customers have yet to experience the chatbot service. On the other hand, perceived privacy risk and technological anxiety may assume greater relevance when the use of chatbots involves the sharing of personal information. For instance, in high-stakes or data-sensitive domains (e.g., banking, insurance, healthcare), privacy risk and creepiness become critical, overshadowing ease of use, usefulness, and social cues (e.g., anthropomorphic cues). Second, future studies should consider the non-linear effect of anthropomorphic cues. While moderate anthropomorphic cues enhance social presence, trust, and emotional engagement, excessive human-likeness can trigger discomfort and uncanny valley responses, resulting in resistance rather than acceptance.
Third, user characteristics (e.g., technology anxiety, need for human interaction, personality traits) further explain variability, as these traits amplify or attenuate responses to both functional and social chatbot features. Taken together, these findings suggest that the mixed or sometimes non-significant effects reported in the literature do not indicate theoretical weakness or inconsistency, but rather underscore the highly-contingent nature of chatbot adoption, which is jointly shaped by interaction timing, sector-specific risk, design intensity, and user heterogeneity. Future research should therefore move beyond testing main effects in isolation and investigate boundary conditions and interaction effects.
  • Despite the recognized importance of emotions in service experiences, relatively few studies have examined consumer emotions in the context of chatbot interactions. Only a limited number of studies investigate how consumers’ emotional experiences with chatbots influence outcomes such as satisfaction and trust [106]. The available findings indicate that chatbots that successfully convey both human-like traits and emotional cues are more likely to be perceived as engaging and effective. Existing evidence further suggests that consumers value not only the quality of information provided, but also the emotional aspects of their interactions with chatbots. Future research should therefore adopt a more integrative perspective by jointly examining utilitarian and hedonic dimensions when investigating chatbot adoption, satisfaction, trust, and continued use. Further studies are needed to explore how hedonic versus utilitarian goals shape emotional enjoyment during chatbot interactions across different consumer segments. Key research questions include whether emotionally expressive chatbots are perceived as more human, how the display of human-like emotions affects satisfaction and adoption, and how emotional mechanisms differ across hedonic and utilitarian service contexts (e.g., entertainment vs. banking). Building on recent qualitative evidence highlighting the emergence of emotional bonds and reliance on conversational AI [105], future research could also examine the sustainability of such relationships and their long-term implications for continued use and dependency. Moreover, comparative studies across sectors are needed to assess how contextual risk and user expectations (e.g., finance, e-government) interact with emotional responses such as trust, empathy, and enjoyment, to shape chatbot adoption and resistance.
  • In the case of service failure, consumers show reluctance to use and trust chatbots. For instance, Ref. [107] report that customers tend to hold persistent beliefs that chatbots lack emotional competence, meaning that apologies and explanations provided after service failures could be generally better received when delivered by human employees rather than by chatbots. These findings raise questions regarding whether strategies, such as design cues, training signals, or transparency mechanisms, could help change consumers’ beliefs about the emotional capabilities of chatbots. If such beliefs can be reshaped, chatbots may become comparably effective to human employees in managing service recovery situations. For training signals, future research could compare two experimental scenarios in which customers are either informed or not that the chatbot has been trained on customer service situations, emotional expressions, or service recovery protocols. This comparison would help assess whether signaling emotional training influences consumer perceptions and responses. Regarding transparency mechanisms, future studies could examine consumer perceptions and beliefs about how chatbots process emotions, detect sentiment, and decide when to transfer an interaction to a human agent. This transparency may reduce skepticism and mitigate negative beliefs about chatbots’ emotional incapacity. Similar research could also investigate the effects of moderate anthropomorphic cues. Cues could involve emotion-sensitive wording, empathetic language (e.g., “I understand how frustrating this situation can be”), adaptive tone (e.g., a chatbot may adopt a reassuring tone when users express anxiety and a friendly tone in a hedonic context), or moderated anthropomorphic features (e.g., warm greetings, acknowledgment of frustration).
  • In the literature related to the role of anthropomorphism in chatbot adoption, increasing attention is directed towards the role of chatbot personality. Anthropomorphic design involves not only visual or linguistic human-likeness, but also the attribution of stable personality traits to chatbots, which influence consumer perceptions, emotional responses, and adoption outcomes. In this regard, personality-based frameworks such as the Big Five personality model could help researchers better comprehend whether chatbots that emulate consumer-like personality traits are more effective in promoting consumer adoption, engagement, and satisfaction. For instance, Ref. [92] specifically examine the role of extroversion and demonstrate that an extroverted chatbot is better suited for interaction with extroverted users, while introverted users respond more favorably to less outgoing chatbot personalities. However, existing research remains largely limited, and a more comprehensive investigation of personality dimensions using the Big Five model is warranted. Future research could explore whether chatbot personalities can be strategically tailored through linguistic style, tone, and response patterns to better align with consumer personality profiles. Such personalization could be particularly beneficial in emotionally-charged or high-stress contexts (e.g., service disruptions and prolonged travel), where consumers are more vulnerable to negative emotions, emotional strain, and anxiety, increasing the likelihood of service interaction failures. Therefore, future studies can move beyond studying generic human-likeness to provide a more nuanced understanding of how specific human-like traits influence emotional responses, emotional engagement, trust, satisfaction, and continued use of chatbots.
  • Regarding methodological issues, the systematic review highlights a predominant reliance on quantitative studies. In Section 4, we reported that several practices, such as an overreliance on recruiting study participants through online platforms (e.g., MTurk), may pose challenges to the external validity of the data, even though such platforms draw from a sizable pool of candidates [108]. Indeed, a significant portion of the selected studies relies on student, convenience, or snowball samples, potentially introducing sampling bias, as participants tend to recruit individuals they are familiar with. We encourage scholars to undertake more qualitative research, as it holds the potential to offer a deeper understanding of this phenomenon and contribute to the development of novel theories or the expansion of existing ones in the realm of chatbot adoption. Our systematic review also points out a dearth of longitudinal studies on the phenomenon. Longitudinal designs would be particularly valuable for capturing how utilitarian, affective, social, and risk-related drivers evolve over time and across interaction stages. Lastly, most studies are confined to specific countries, introducing potential biases. To mitigate these biases, researchers are encouraged to conduct multi-country and multi-sector studies, thereby enhancing the external validity of their research.
  • Another important methodological observation concerns the geographic concentration and sampling approaches used in the empirical literature. As reported earlier in Section 4, a significant proportion of studies rely on convenience samples and are conducted in a relatively small number of countries, particularly China, the United States, India, and the United Kingdom. While these contexts provide valuable insights into chatbot adoption, this concentration may influence the generalizability of certain findings. Constructs such as trust, anthropomorphism, privacy concerns, and technological anxiety are likely to be sensitive to cultural norms, institutional environments, and regulatory frameworks. For instance, perceptions of privacy risk and data protection may differ significantly between regions with strong regulatory regimes and those with more permissive data practices, while anthropomorphic design cues may be interpreted differently depending on cultural attitudes toward human–AI interaction. Therefore, the strength and direction of these relationships may vary across cultural contexts. Future research should thus expand empirical investigations to a broader range of geographic regions and employ cross-cultural comparative designs to better assess the robustness of these mechanisms across institutional and cultural environments.

7. Study Limitations

This research has three limitations. First, to select the relevant articles for this systematic review, we initially excluded articles and works primarily related to the field of information technology. Our focus was then narrowed to scientific papers published in peer-reviewed journals, including conceptual articles, literature reviews, and empirical studies (both quantitative and qualitative). Papers selected for inclusion needed to specifically address factors explaining the adoption or non-adoption of chatbots.
Second, it is essential to acknowledge that the search terms employed, and the screening process used, may not have captured all potentially relevant studies concerning chatbot adoption. By choosing to focus our research on three prominent electronic databases, ABI/Inform Global, Business Source Premier, and Web of Science, there is a possibility that some relevant articles were inadvertently overlooked. Despite this limitation, the thoroughness of our reference checks and the meticulousness of our overall procedures give us confidence that we have compiled a representative set of articles. The approach adopted in this study decreases the likelihood that any omitted articles would substantially impact the findings presented here.
Third, we concentrated exclusively on the technological and psychological factors influencing consumer adoption of chatbots. Subsequent reviews could explore additional non-technological and non-psychological drivers of chatbot adoption.

8. Conclusions

Research on consumer adoption of chatbots has experienced significant growth in recent years. This study is timely as it offers a comprehensive review and analysis of the current landscape in chatbot adoption research, with valuable implications for researchers. Our primary focus is a systematic review and synthesis of the literature on factors influencing chatbot adoption. We conducted an analysis of chatbot research from January 2015 to December 2025. Addressing RQ1, which describes publication trends, including relevant journals and highly-cited papers, our review quantifies key indicators across the literature. The results show strong year-on-year growth in chatbot adoption research and highlight notable publications that address timely, relevant issues and are frequently cited in the literature. Addressing RQ2, which analyzes the methodological approaches and theoretical models used in the field, empirical studies clearly dominate, and the hospitality and tourism sector emerges as the most extensively studied, followed by the e-commerce sector. Most studies used cross-sectional designs, and about half of the samples included at least 200 participants. Most of the studies were conducted in China (n = 35) and the United States (n = 20). The predominant methods of analysis are linear regressions and structural equation modelling. Regarding the theoretical models, our findings reveal that conventional adoption models, such as TAM and UTAUT, are widely employed. While these models provide valuable insights into initial technological acceptance by consumers, our analysis underscores their limitations in explaining sustained adoption and prolonged usage over time. Our research also shows that despite strong interest in the emotions stemming from chatbot interactions and in how these emotions can lead to adoption, technology-based models still attract the bulk of scholarly attention.
Addressing RQ3, which investigates the drivers and barriers to consumers’ adoption of chatbots, our study shows that perceived usefulness, perceived ease of use, anthropomorphic characteristics, consumer trust in chatbot applications, and the ability of chatbots to emulate a human-like personality are key drivers of chatbot adoption, whereas perceived risks and anxiety associated with chatbot uses are the most studied barriers. Addressing RQ4, which outlines directions for future research, the study recommends integrating complementary theoretical perspectives, further exploring emotions and personality-based mechanisms, expanding research across diverse demographics and sectors, and employing richer qualitative and cross-cultural methods to deepen the understanding of chatbot adoption.
From a practical standpoint, this study organizes the factors influencing chatbot adoption into distinct categories, providing valuable insights for marketers into their relative significance. Marketers can leverage this understanding to tailor strategies and approaches that address the specific factors driving or hindering chatbot adoption. Indeed, by aligning chatbot features, such as anthropomorphic characteristics and personality traits, more closely with consumer needs and preferences, marketers can deliver an improved user experience with chatbot applications. This should lead to increased adoption and reuse of chatbot applications and more effective engagement among users. More precisely, marketing professionals need to consider trust and engagement. For example, adding human-like traits (e.g., using names, humor, and empathy) can enhance trust and engagement on the consumer end. Marketers could therefore test different levels of human-likeness to find the appropriate level for their target consumer audience. In addition, marketing professionals should consider using chatbots to tailor recommendations, promotions, and support to specific consumers. Research has shown that transparency on the chatbot’s end is critical to establishing trust with users. Marketing professionals should therefore make sure that chatbots clearly disclose their identity, so that consumers know full well they are interacting with an artificial intelligence chatbot. Marketing professionals should also be careful to position the chatbot as an assistant rather than a human being; this may help prevent consumer distrust towards chatbots. It is also important that consumers trust the AI chatbot as a legitimate extension of the brand.
Indeed, research indicates that the chatbot’s communication style should closely align with the brand’s tone, for example, using a professional tone within the banking and finance industry, or adopting a more playful style for a lifestyle brand. Since recovery experiences strongly influence customer satisfaction, marketers should design escalation protocols to prevent customer frustration. From an ethical perspective, since customers are increasingly wary of data misuse, marketers should ensure that chatbot data collection is communicated transparently. In addition, careful training of chatbot AI is necessary to avoid biased or offensive responses that could damage brand reputation. Finally, inclusive design may increase acceptance across diverse audiences.

Author Contributions

Conceptualization, J.-M.L. and R.L.; methodology, J.-M.L. and R.L.; software, J.-M.L.; validation, R.L. and J.-M.L.; formal analysis, J.-M.L.; investigation, J.-M.L.; resources, J.-M.L.; data curation, J.-M.L. and R.L.; writing—original draft preparation, J.-M.L.; writing—review and editing, R.L.; visualization, J.-M.L. and R.L.; supervision, R.L.; project administration, R.L.; funding acquisition, R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AIDUA: Artificially Intelligent Device Use Acceptance
CASA: Computers Are Social Actors
DOI: Diffusion of Innovation
ELM: Elaboration Likelihood Model
HCI: Human–Computer Interaction
IDT: Innovation Diffusion Theory
NLP: Natural Language Processing
SEM: Structural Equation Modeling
SLR: Systematic Literature Review
SOR: Stimulus–Organism–Response
TAM: Technology Acceptance Model
TRA: Theory of Reasoned Action
TTF: Task–Technology Fit
U&G: Uses and Gratifications
UTAUT: Unified Theory of Acceptance and Use of Technology
UTAUT2: Unified Theory of Acceptance and Use of Technology 2

Appendix A. Review of Prior Reviews of Chatbots and Distinct Contribution of the Present Study

Study | Scope | Time Window | Method | Unique Contribution of Prior Study | How Our Study Differs in Terms of Research Objectives and Contributions
[109] | Marketing chatbots | 2000–2019 | SLR (n = 53) | Morphological taxonomy (264 variants) | Adds effect-direction coding (±/mixed). Our study integrates drivers and barriers, cross-industry moderation (retail/banking/health/public), evaluates construct redundancy.
[35] | Human–chatbot interaction | 2010–2020 | SLR (n = 83) | Experiential synthesis over a decade | Our study unifies cognitive, affective, technological, and contextual drivers, and codes statistical consistency. It also compares sectors rather than adopting an HCI-only focus.
[110] | Tourism evolution | 2016–2020 | SLR (n = 27) | Tourism-focused implementation taxonomy | Our study moves from technical taxonomy to empirically validated psychological determinants. It also conducts a multi-sector comparison.
[36] | Chatbots & loyalty | 2015–2021 | SLR (n = 41) | Loyalty linkage model | Our study builds a full antecedent → mediator → outcome chain. It also tests industry/modality boundary effects.
[111] | Adoption mapping | 2000–2023 | SLR (n = 219) | Large-scale theoretical mapping | Our study codes statistical direction and contradictions. It assesses measurement heterogeneity and evaluates construct overlap.
[112] | AI chatbot adoption | 2017–2024 | SLR (n = 61) | Conceptual AI adoption framework | Our study integrates barriers and outcomes, compares AI vs. rule-based bots, and identifies sectoral moderators.
[90] | Drivers & barriers | 2010–2024 | SLR (n = 84) | Determinant integration | Our study links determinants to specific behavioral outcomes. It also contrasts regulated vs. non-regulated industries.
[41] | Higher education chatbots | 2019–2023 | SLR (n = 40) | Education-focused synthesis | Our study extends beyond education and enables cross-sector boundary testing.
[42] | AI in higher education | 2022–2024 | SLR (n = 37) | Activity-theory model | Our study integrates TAM/UTAUT/SOR/TPB simultaneously and tests robustness across industries.
[113] | Hospitality | 2019–2023 | SLR (n = 48) | Hospitality implementation roadmap | Our study compares hospitality vs. retail/banking/health and identifies contextual boundary conditions.
[114] | Healthcare | 2010–2024 | SLR (n = 84) | Healthcare research agenda | Our study compares healthcare to other industries and identifies cross-sector invariants vs. healthcare-specific moderators (regulatory intensity).
[85] | TAM validation | 2008–2022 | Meta-analysis (n = 70) | Quantitative SEM validation of TAM | Our study extends beyond TAM-only and integrates affective, technological, and contextual inhibitors as well as industry moderation.
[104] | Adoption intention | 2000–2023 | Meta-analysis (n = 54) | Moderator reconciliation | Our study moves beyond intention. It also integrates loyalty, satisfaction, and resistance constructs.
[115] | Trust | 2005–2024 | Meta-analysis (n = 54) | Trust-centered synthesis | Our study situates trust within a multi-driver ecosystem. It also tests interactions with usefulness/privacy.
[40] | Anthropomorphism | 2010–2024 | Meta-analytic SEM (n = 32) | SOR pathway validation | Our study positions anthropomorphism alongside usefulness/risk/privacy, with cross-industry strength comparison.
[28] | Startup theories | 2020–2024 | Narrative Review | Startup theoretical mapping | Our study provides empirical cross-sector validation beyond theory classification.
[116] | Customer experience | 2019–2022 | Narrative Review | Marketing-focused CE synthesis | Our study integrates CE with determinants, inhibitors, and effect-direction coding, and extends beyond marketing-only samples.

Appendix B. Database-Specific Search Strings

Database: ABI/Inform Global (ProQuest)
Fields Searched: Title (TI), Abstract (AB), Keywords (KW)
Search Syntax: TI, AB, KW ((“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “ai assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “ai-powered Assistant” OR “automated conversational system” OR “conversational ai” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “ECA” OR “dialogue system”) AND (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) AND (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G Model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of Innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”)

Database: Business Source Premier (EBSCOhost)
Fields Searched: Title (TI), Abstract (AB), Subject Terms (SU)
Search Syntax: (TI OR AB OR SU) (“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “ai assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “ai-powered assistant” OR “automated conversational System” OR “conversational ai” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “eca” OR “dialogue system”) AND (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) AND (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”))

Database: Web of Science Core Collection
Fields Searched: Topic (TS)
Search Syntax: TS = (“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “ai assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “ai-powered assistant” OR “automated conversational system” OR “conversational ai” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “ECA” OR “dialogue system”) AND (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”) AND (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G Model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of Innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”))
Note: In all databases, filters were applied for peer-reviewed journal articles, English language, and publication period January 2015–December 2025.

Appendix C. Data Extraction Template and Coding Definitions

Field | Definition and Coding Rule
Author(s) | Full list of authors as reported in the published article.
Publication Year | Year of publication of the article. Articles in press were coded using the year indicated by the journal.
Journal | Journal in which the article was published.
Study Type | Coded as quantitative, qualitative, mixed methods, systematic review, or meta-analysis, based on the primary methodology used.
Theoretical Framework(s) | All explicitly stated theories or models used to explain chatbot adoption. Multiple theories were coded when applicable.
Key Variables/Constructs | Independent, mediating, moderating, and dependent variables examined in relation to chatbot adoption or usage.
Drivers of Adoption | Factors empirically found to have a positive and statistically significant effect on chatbot adoption, usage intention, continuance intention, satisfaction, or acceptance.
Barriers to Adoption | Factors empirically found to have a negative and statistically significant effect on chatbot adoption or continued use.
Industry/Sector | Primary industry context studied (e.g., hospitality, banking, e-commerce, healthcare). Studies examining more than one industry were coded as multi-sector.
Chatbot Type | Coded as task-oriented, service-oriented, social/relational, or mixed, based on the chatbot’s primary function.
Geographic Context | Country or countries in which the empirical data were collected. Multi-country studies were coded separately.
Sample Characteristics | Sample size and participant type (e.g., consumers, students, customers, managers).
Analytical Method | Main analytical technique used (e.g., SEM, regression, ANOVA, thematic analysis).
Key Findings | Main findings related to chatbot adoption drivers and/or barriers.
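The template above can be represented as a structured record; the sketch below is an assumption about how such coding might be implemented in software (it is not the authors' actual tooling) and shows the multi-sector coding rule from the Industry/Sector field.

```python
# Hypothetical extraction record mirroring the Appendix C template.
# The ExtractionRecord class and its field names are illustrative
# assumptions, not part of the published review protocol.

from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    authors: list
    year: int
    journal: str
    study_type: str  # quantitative / qualitative / mixed methods / review / meta-analysis
    theories: list = field(default_factory=list)
    industries: list = field(default_factory=list)

    @property
    def sector_code(self) -> str:
        # Coding rule: studies examining more than one industry -> multi-sector.
        if len(self.industries) > 1:
            return "multi-sector"
        return self.industries[0] if self.industries else "unspecified"

rec = ExtractionRecord(
    authors=["Sheehan", "Jin", "Gottlieb"],
    year=2020,
    journal="Journal of Business Research",
    study_type="quantitative",
    theories=["TAM"],
    industries=["retail", "banking"],
)
print(rec.sector_code)  # multi-sector
```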

Appendix D. Drivers and Barriers to Chatbot Adoption

This appendix summarizes the drivers and barriers identified across the 202 reviewed studies. Letters indicate the consistency of reported effects: A = consistently positive; B = mostly positive or mixed; C = significantly negative.
Category | Construct | Freq. | Stat. | Main Outcomes | Country Concentration | Industry Context | Chatbot Type
Utilitarian | Perceived Usefulness | 39 | A | Adoption, continuance | China, Germany, UK | Hospitality, Retail, Education | Transactional & Informational (Text)
Utilitarian | Perceived Ease of Use | 32 | B | Adoption, trust | China, India, Korea, UK | Hospitality, E-commerce | Text-based
Utilitarian | Performance Expectancy | 22 | A | Adoption | China, India, Korea, UK | Education, Banking | Transactional
Utilitarian | Effort Expectancy | 18 | B | Adoption intention | China, UK | Education, Retail | Transactional
Utilitarian | Facilitating Conditions | 9 | B | Continuance | India, China, UK | Education | Transactional
Utilitarian | Compatibility | 6 | A | Initial trust | China | Retail | Transactional
Utilitarian | Information Quality | 11 | A | Satisfaction | China | Finance, E-commerce | Informational
Utilitarian | System Quality | 7 | A | Adoption | Multi-country | Service sectors | Transactional
Utilitarian | Service Quality | 8 | A | Experience | China | Hospitality | Service bots
Utilitarian | Convenience | 5 | A | Trust | China | Online retail | Informational
Utilitarian | Response Accuracy | 6 | A | Trust | China | Banking | Transactional
Utilitarian | Perceived Intelligence | 10 | B | Adoption | China, US | Retail | AI-powered
Utilitarian | Perceived Competence | 6 | A | Trust | China | Service sectors | Transactional
Utilitarian | Task–Technology Fit | 4 | A | Adoption intention | China | Education | Transactional
Utilitarian | Perceived Value | 5 | A | Adoption | Multi-country | Mixed sectors | Transactional
Trust-Based | Trust (overall) | 13 | A | Adoption, loyalty | China, UK | Banking, Insurance | Transactional
Trust-Based | Initial Trust | 6 | A | Adoption | China | Retail | Transactional
Trust-Based | Trust in Data Protection | 4 | A | Adoption | EU, China | Banking, Public services | Data-intensive
Trust-Based | Trust in Functionality | 3 | A | Usage | China | Finance | Transactional
Trust-Based | Credibility | 4 | A | Trust | China | Banking | Transactional
Social | Anthropomorphism | 59 | B | Adoption, satisfaction | China, US, UK | Hospitality, Retail | Text & Voice; Social bots
Social | Human-likeness | 14 | B | Adoption | China | Retail | Voice-based higher
Social | Empathy | 8 | A | Engagement | UK, China | Hospitality | Service & Social bots
Social | Social Presence | 11 | A | Trust | China, US | Service sectors | Voice-based
Social | Avatar Presence | 6 | B | Adoption | China | Retail | Visual bots
Social | Chatbot Personality | 7 | A | Adoption | US, China | Retail | Social bots
Social | Personality Congruence | 4 | A | Engagement | China | Retail | Social bots
Social | Mind Perception | 5 | A | Closeness | China | Mixed sectors | Anthropomorphic bots
Social | Parasocial Interaction | 3 | A | Attachment | UK, Qatar | Social bots | Companion bots
Social | Perceived Warmth | 5 | A | Trust | China | Hospitality | Service bots
Affective | Enjoyment | 9 | B | Continuance | China, India | Hospitality | Conversational bots
Affective | Hedonic Motivation | 8 | A | Adoption | China (Gen Z) | Retail, Education | Conversational
Affective | Emotional Attachment | 4 | A | Continuance | UK | Social bots | Companion bots
Affective | Intimacy (Emojis) | 3 | A | Satisfaction | China | Retail | Text-based
Affective | Engagement | 6 | A | Adoption | China | Retail | Interactive bots
Affective | Experiential Value | 4 | A | Experience | China | Hospitality | Service bots
Barrier | Perceived Risk | 4 | C | Adoption | India, China | Banking, Insurance | Transactional
Barrier | Privacy Concerns | 6 | C | Adoption | EU, US | Banking, Public | Voice/Data bots
Barrier | Technological Anxiety | 4 | C | Continuance | India | Mixed sectors | AI-powered
Barrier | Creepiness | 3 | C | Trust | China | Retail | Highly anthropomorphic
Barrier | Uncanny Valley | 2 | C | Adoption | China | Retail | Voice-based
Barrier | Perceived Time Risk | 2 | C | Continuance | India | Service | Transactional
Barrier | Perceived Sacrifice | 2 | B | Experience | China | E-commerce | Transactional
Barrier | AI Skepticism | 2 | C | Adoption | US | Public services | AI bots
Barrier | Low Technology | 2 | C | Adoption | India | Mixed | AI-powered
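As a small illustration of how the tabulated frequencies can be aggregated by category (for instance, to compare the weight of evidence behind utilitarian drivers versus barriers), the sketch below uses a handful of rows from the table above; the row subset and the tallying code are illustrative, not part of the review's analysis.

```python
# Illustrative tally of construct frequencies by category, using a
# small sample of rows from the Appendix D table.

from collections import defaultdict

# (category, construct, frequency, consistency rating)
rows = [
    ("Utilitarian", "Perceived Usefulness", 39, "A"),
    ("Utilitarian", "Perceived Ease of Use", 32, "B"),
    ("Social", "Anthropomorphism", 59, "B"),
    ("Barrier", "Perceived Risk", 4, "C"),
    ("Barrier", "Privacy Concerns", 6, "C"),
]

totals = defaultdict(int)
for category, _construct, freq, _rating in rows:
    totals[category] += freq

print(dict(totals))  # {'Utilitarian': 71, 'Social': 59, 'Barrier': 10}
```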

References

  1. Luo, X.; Tong, S.; Fang, Z.; Qu, Z. Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Mark. Sci. 2019, 38, 937–947. [Google Scholar] [CrossRef]
  2. Priya, B.; Sharma, V. Exploring users’ adoption intentions of intelligent virtual assistants in financial services: An anthropomorphic perspectives and socio-psychological perspectives. Comput. Hum. Behav. 2023, 148, 107912. [Google Scholar] [CrossRef]
  3. Rese, A.; Ganster, L.; Baier, D. Chatbots in retailers’ customer communication: How to measure their acceptance? J. Retail. Consum. Serv. 2020, 56, 102176. [Google Scholar] [CrossRef]
  4. Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Futur. Gener. Comput. Syst. 2019, 92, 539–548. [Google Scholar] [CrossRef]
  5. Eren, B.A. Determinants of customer satisfaction in chatbot use: Evidence from a banking application in Turkey. Int. J. Bank. Mark. 2021, 39, 294–311. [Google Scholar] [CrossRef]
  6. Sheehan, B.; Jin, H.S.; Gottlieb, U. Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res. 2020, 115, 14–24. [Google Scholar] [CrossRef]
  7. Nadarzynski, T.; Miles, O.; Cowie, A.; Ridge, D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit. Health 2019, 5, 205520761987180. [Google Scholar] [CrossRef]
  8. Pérez, J.Q.; Daradoumis, T.; Puig, J.M.M. Rediscovering the use of chatbots in education: A systematic literature review. Comput. Appl. Eng. Educ. 2020, 28, 1549–1565. [Google Scholar] [CrossRef]
  9. Cai, D.; Li, H.; Law, R. Anthropomorphism and OTA chatbot adoption: A mixed methods study. J. Travel Tour. Mark. 2022, 39, 228–255. [Google Scholar] [CrossRef]
  10. Chi, O.H.; Denton, G.; Gursoy, D. Artificially intelligent device use in service delivery: A systematic review, synthesis, and research agenda. J. Hosp. Mark. Manag. 2020, 29, 757–786. [Google Scholar] [CrossRef]
  11. Corti, K.; Gillespie, A. Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human. Comput. Hum. Behav. 2016, 58, 431–442. [Google Scholar] [CrossRef]
  12. Terblanche, N.; Kidd, M. Adoption Factors and Moderating Effects of Age and Gender That Influence the Intention to Use a Non-Directive Reflective Coaching Chatbot. SAGE Open 2022, 12, 215824402210961. [Google Scholar] [CrossRef]
  13. Gkinko, L.; Elbanna, A. The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users. Int. J. Inf. Manag. 2023, 69, 102568. [Google Scholar] [CrossRef]
  14. Jang, M.; Jung, Y.; Kim, S. Investigating managers’ understanding of chatbots in the Korean financial industry. Comput. Hum. Behav. 2021, 120, 106747. [Google Scholar] [CrossRef]
  15. Lei, S.I.; Shen, H.; Ye, S. A comparison between chatbot and human service: Customer perception and reuse intention. Int. J. Contemp. Hosp. Manag. 2021, 33, 3977–3995. [Google Scholar] [CrossRef]
  16. Liu, M.; Yang, Y.; Ren, Y.; Jia, Y.; Ma, H.; Luo, J.; Fang, S.; Qi, M.; Zhang, L. What influences consumer AI chatbot use intention? An application of the extended technology acceptance model. J. Hosp. Tour. Technol. 2024, 15, 667–689. [Google Scholar] [CrossRef]
  17. Liu, G.L.; Darvin, R.; Ma, C. Exploring AI-mediated informal digital learning of English (AI-IDLE): A mixed-method investigation of Chinese EFL learners’ AI adoption and experiences. Comput. Assist. Lang. Learn. 2024, 38, 1632–1660. [Google Scholar] [CrossRef]
  18. Leung, C.H.; Chan, W.T.Y. Retail chatbots: The challenges and opportunities of conversational commerce. Soc. Media Mark. 2020, 8, 68. [Google Scholar] [CrossRef]
  19. Ayanwale, M.A.; Molefi, R.R. Exploring intention of undergraduate students to embrace chatbots: From the vantage point of Lesotho. Int. J. Educ. Technol. High. Educ. 2024, 21, 20. [Google Scholar] [CrossRef]
  20. Tian, W.; Ge, J.; Zhao, Y.; Zheng, X. AI Chatbots in Chinese higher education: Adoption, perception, and influence among graduate students—An integrated analysis utilizing UTAUT and ECM models. Front. Psychol. 2024, 15, 1268549. [Google Scholar] [CrossRef]
  21. Wang, C.; Li, S.; Lin, N.; Zhang, X.; Han, Y.; Wang, X.; Liu, D.; Tan, X.; Pu, D.; Li, K.; et al. Application of Large Language Models in Medical Training Evaluation—Using ChatGPT as a Standardized Patient: Multimetric Assessment. J. Med. Internet Res. 2025, 27, e59435. [Google Scholar] [CrossRef]
  22. Fatima, J.K.; Khan, M.I.; Bahmannia, S.; Chatrath, S.K.; Dale, N.F.; Johns, R. Rapport with a chatbot? The underlying role of anthropomorphism in socio-cognitive perceptions of rapport and e-word of mouth. J. Retail. Consum. Serv. 2024, 77, 103666. [Google Scholar] [CrossRef]
  23. Esiyok, E.; Gokcearslan, S.; Kucukergin, K.G. Acceptance of Educational Use of AI Chatbots in the Context of Self-Directed Learning with Technology and ICT Self-Efficacy of Undergraduate Students. Int. J. Hum.–Comput. Interact. 2025, 41, 641–650. [Google Scholar] [CrossRef]
  24. Agnihotri, A.; Bhattacharya, S. Chatbots’ effectiveness in service recovery. Int. J. Inf. Manag. 2024, 76, 102679. [Google Scholar] [CrossRef]
  25. Zhang, H.; Qiu, S.; Wang, X.; Yuan, X. Robots or humans: Who is more effective in promoting hospitality services? Int. J. Hosp. Manag. 2024, 119, 103728. [Google Scholar] [CrossRef]
  26. Chin, H.; Yi, M.Y. Exploring the influence of user characteristics on verbal aggression towards social chat-bots. Behav. Inf. Technol. 2025, 44, 1576–1594. [Google Scholar] [CrossRef]
  27. Dastane, O.; Ooi, M.Y.; Aw, E.C.X.; Shyu, W.H.; Tan, G.W.H. Skip the AI-BOTs: Let’s have real conversations in human-centric services. J. Consum. Mark. 2025, 42, 484–497. [Google Scholar] [CrossRef]
  28. Najarian, A.; Hejazinia, R. An Examination of Technology Acceptance Models for Chatbot Adoption in Startup E-businesses: A Narrative Review. Int. J. Manag. Account. Econ. 2025, 12, 471. [Google Scholar] [CrossRef]
  29. Singh, D.; Kunja, S.R. Engaging guests for a greener tomorrow: Examining the role of hotel chatbots in encouraging pro-environmental behavior. Tour. Hosp. Res. 2025. [Google Scholar] [CrossRef]
  30. Song, M.; Zhang, H.; Xing, X.; Duan, Y. Appreciation vs. apology: Research on the influence mechanism of chatbot service recovery based on politeness theory. J. Retail. Consum. Serv. 2023, 73, 103323. [Google Scholar] [CrossRef]
  31. Xiao, R.; Yazan, M.; Situmeang, F.B.I. Rethinking Conversation Styles of Chatbots from the Customer Perspective: Relationships between Conversation Styles of Chatbots, Chatbot Acceptance, and Perceived Tie Strength and Perceived Risk. Int. J. Hum.–Comput. Interact. 2025, 41, 1343–1363. [Google Scholar] [CrossRef]
  32. Ischen, C.; Araujo, T.; van Noort, G.; Voorveld, H.; Smit, E. “I Am Here to Assist You Today”: The Role of Entity, Interactivity and Experiential Perceptions in Chatbot Persuasion. J. Broadcast. Electron. Media 2020, 64, 615–639. [Google Scholar] [CrossRef]
  33. Tranfield, D.; Denyer, D.; Smart, P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
  34. Chen, H.L.; Vicki Widarso, G.; Sutrisno, H.A. ChatBot for Learning Chinese: Learning Achievement and Technology Acceptance. J. Educ. Comput. Res. 2020, 58, 1161–1189. [Google Scholar] [CrossRef]
  35. Rapp, A.; Curti, L.; Boldi, A. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. Int. J. Hum.–Comput. Stud. 2021, 151, 102630. [Google Scholar] [CrossRef]
  36. Jenneboer, L.; Herrando, C.; Constantinides, E. The Impact of Chatbots on Customer Loyalty: A Systematic Literature Review. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 212–229. [Google Scholar] [CrossRef]
  37. Żyminkowska, K.; Zachurzok-Srebrny, E. The Role of Artificial Intelligence in Customer Engagement and Social Media Marketing—Implications from a Systematic Review for the Tourism and Hospitality Sectors. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 184. [Google Scholar] [CrossRef]
  38. Ling, E.C.; Tussyadiah, I.; Tuomi, A.; Stienmetz, J.; Ioannou, A. Factors influencing users’ adoption and use of conversational agents: A systematic review. Psychol. Mark. 2021, 38, 1031–1051. [Google Scholar] [CrossRef]
  39. Greilich, A.; Bremser, K.; Wüst, K. Consumer Response to Anthropomorphism of Text-Based AI Chatbots: A Systematic Literature Review and Future Research Directions. Int. J. Consum. Stud. 2025, 49, e70108. [Google Scholar] [CrossRef]
  40. Zhang, F.; Sheng, D. Anthropomorphism’s impact on chatbot adoption: A meta-analytic structural equation modeling approach. Technol. Soc. 2026, 84, 103099. [Google Scholar] [CrossRef]
  41. Anjulo Lambebo, E.; Chen, H.L. Chatbots in higher education: A systematic review. Interact. Learn. Environ. 2025, 33, 2781–2807. [Google Scholar] [CrossRef]
  42. Ma, W.; Ma, W.; Hu, Y.; Bi, X. The who, why, and how of ai-based chatbots for learning and teaching in higher education: A systematic review. Educ. Inf. Technol. 2025, 30, 7781–7805. [Google Scholar] [CrossRef]
  43. Adamopoulou, E.; Moussiades, L. Chatbots: History, technology, and applications. Mach. Learn. Appl. 2020, 2, 100006. [Google Scholar] [CrossRef]
  44. Huang, A.; Chao, Y.; De La Mora Velasco, E.; Bilgihan, A.; Wei, W. When artificial intelligence meets the hospitality and tourism industry: An assessment framework to inform theory and management. J. Hosp. Tour. Insights 2022, 5, 1080–1100. [Google Scholar] [CrossRef]
  45. Bouhia, M.; Rajaobelina, L.; PromTep, S.; Arcand, M.; Ricard, L. Drivers of privacy concerns when interacting with a chatbot in a customer service encounter. Int. J. Bank. Mark. 2022, 40, 1159–1181. [Google Scholar] [CrossRef]
  46. Beattie, A.; Edwards, A.P.; Edwards, C. A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication. Commun. Stud. 2020, 71, 409–427. [Google Scholar] [CrossRef]
  47. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. J. Clin. Epidemiol. 2021, 134, 178–189. [Google Scholar] [CrossRef] [PubMed]
  48. Paul, J.; Lim, W.M.; O’Cass, A.; Hao, A.W.; Bresciani, S. Scientific Procedures and Rationales for Systematic Literature Reviews (SPAR-4-SLR). Int. J. Consum. Stud. 2021, 45, O1–O16. [Google Scholar] [CrossRef]
  49. Gohoungodji, P.; N’Dri, A.B.; Latulippe, J.M.; Matos, A.L.B. What is stopping the automotive industry from going green? A systematic review of barriers to green innovation in the automotive industry. J. Clean. Prod. 2020, 277, 123524. [Google Scholar] [CrossRef]
  50. Nordheim, C.B.; Følstad, A.; Bjørkli, C.A. An Initial Model of Trust in Chatbots for Customer Service—Findings from a Questionnaire Study. Interact. Comput. 2019, 31, 317–335. [Google Scholar] [CrossRef]
  51. Lappeman, J.; Marlie, S.; Johnson, T.; Poggenpoel, S. Trust and digital privacy: Willingness to disclose personal information to banking chatbot services. J. Financ. Serv. Mark. 2023, 28, 337–357. [Google Scholar] [CrossRef]
  52. Yuriev, A.; Boiral, O.; Francoeur, V.; Paillé, P. Overcoming the barriers to pro-environmental behaviors in the workplace: A systematic review. J. Clean. Prod. 2018, 182, 379–394. [Google Scholar] [CrossRef]
  53. Gagnon, J.; Halilem, N.; Bouchard, J. A relay race or an ironman? A systematic review of the literature on innovation in the mining sector. Resour. Policy 2024, 98, 105363. [Google Scholar] [CrossRef]
  54. Adam, M.; Wessel, M.; Benlian, A. AI-based chatbots in customer service and their effects on user compliance. Electron. Mark. 2021, 31, 427–445. [Google Scholar] [CrossRef]
  55. Araujo, T. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput. Hum. Behav. 2018, 85, 183–189. [Google Scholar] [CrossRef]
  56. Chung, M.; Ko, E.; Joung, H.; Kim, S.J. Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 2020, 117, 587–595. [Google Scholar] [CrossRef]
  57. Hill, J.; Randolph Ford, W.; Farreras, I.G. Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Comput. Hum. Behav. 2015, 49, 245–250. [Google Scholar] [CrossRef]
  58. Go, E.; Sundar, S.S. Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Comput. Hum. Behav. 2019, 97, 304–316. [Google Scholar] [CrossRef]
  59. Fryer, L.K.; Ainley, M.; Thompson, A.; Gibson, A.; Sherlock, Z. Stimulating and sustaining interest in a language course: An experimental comparison of Chatbot and Human task partners. Comput. Hum. Behav. 2017, 75, 461–468. [Google Scholar] [CrossRef]
  60. Mou, Y.; Xu, K. The media inequality: Comparing the initial human-human and human-AI social interactions. Comput. Hum. Behav. 2017, 72, 432–440. [Google Scholar] [CrossRef]
  61. Zaki, H.S.; Al-Romeedy, B.S. Chatbot symbolic recovery and customer forgiveness: A moderated mediation model. J. Hosp. Tour. Technol. 2024, 15, 610–628. [Google Scholar] [CrossRef]
  62. Hasan, R.; Shams, R.; Rahman, M. Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. J. Bus. Res. 2021, 131, 591–597. [Google Scholar] [CrossRef]
  63. Pillai, R.; Ghanghorkar, Y.; Sivathanu, B.; Algharabat, R.; Rana, N.P. Adoption of artificial intelligence (AI) based employee experience (EEX) chatbots. Inf. Technol. People 2024, 37, 449–478. [Google Scholar] [CrossRef]
  64. Skjuve, M.; Følstad, A.; Fostervold, K.I.; Brandtzaeg, P.B. My Chatbot Companion—A Study of Human-Chatbot Relationships. Int. J. Hum.–Comput. Stud. 2021, 149, 102601. [Google Scholar] [CrossRef]
  65. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  66. Meyer-Waarden, L.; Pavone, G.; Poocharoentou, T.; Prayatsup, P.; Ratinaud, M.; Tison, A.; Torné, S. How Service Quality Influences Customer Acceptance and Usage of Chatbots? J. Serv. Manag. Res. 2020, 4, 35–51. [Google Scholar] [CrossRef]
  67. Mostafa, R.B.; Kasamani, T. Antecedents and consequences of chatbot initial trust. Eur. J. Mark. 2022, 56, 1748–1771. [Google Scholar] [CrossRef]
  68. Ameen, N.; Tarhini, A.; Reppel, A.; Anand, A. Customer experiences in the age of artificial intelligence. Comput. Hum. Behav. 2021, 114, 106548. [Google Scholar] [CrossRef]
  69. De Cicco, R.; Silva, S.C.; Alparone, F.R. Millennials’ attitude toward chatbots: An experimental study in a social relationship perspective. Int. J. Retail. Distrib. Manag. 2020, 48, 1213–1233. [Google Scholar] [CrossRef]
  70. Pizzi, G.; Scarpi, D.; Pantano, E. Artificial intelligence and the new forms of interaction: Who has the control when interacting with a chatbot? J. Bus. Res. 2021, 129, 878–890. [Google Scholar] [CrossRef]
  71. Yen, C.; Chiang, M.C. Trust me, if you can: A study on the factors that influence consumers’ purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behav. Inf. Technol. 2021, 40, 1177–1194. [Google Scholar] [CrossRef]
  72. Cheng, X.; Bao, Y.; Zarifis, A.; Gong, W.; Mou, J. Exploring consumers’ response to text-based chatbots in e-commerce: The moderating role of task complexity and chatbot disclosure. Internet Res. 2022, 32, 496–517. [Google Scholar] [CrossRef]
  73. Zungu, N.P.; Amegbe, H.; Hanu, C.; Asamoah, E.S. AI-driven self-service for enhanced customer experience outcomes in the banking sector. Cogent Bus. Manag. 2025, 12, 2450295. [Google Scholar] [CrossRef]
  74. Zhang, K.; Luo, J.; Huang, Q.; Zhang, K.; Du, J. The Effect of Perceived Interactivity on Continuance Intention to Use AI Conversational Agents: A Two-Stage Hybrid PLS-ANN Approach. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 255. [Google Scholar] [CrossRef]
  75. Wang, X.; Lin, X.; Shao, B. Artificial intelligence changes the way we work: A close look at innovating with chat-bots. J. Assoc. Inf. Sci. Technol. 2023, 74, 339–353. [Google Scholar] [CrossRef]
  76. Pillai, R.; Sivathanu, B. Adoption of AI-based chatbots for hospitality and tourism. Int. J. Contemp. Hosp. Manag. 2020, 32, 3199–3226. [Google Scholar] [CrossRef]
  77. Melián-González, S.; Gutiérrez-Taño, D.; Bulchand-Gidumal, J. Predicting the intentions to use chatbots for travel and tourism. Curr. Issues Tour. 2021, 24, 192–210. [Google Scholar] [CrossRef]
  78. Balakrishnan, J.; Abed, S.S.; Jones, P. The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technol. Forecast. Soc. Change 2022, 180, 121692. [Google Scholar] [CrossRef]
  79. Aslam, W.; Ahmed Siddiqui, D.; Arif, I.; Farhat, K. Chatbots in the frontline: Drivers of acceptance. Kybernetes 2022, 52, 3781–3810. [Google Scholar] [CrossRef]
  80. Silva, S.C.; De Cicco, R.; Vlačić, B.; Elmashhara, M.G. Using chatbots in e-retailing—How to mitigate perceived risk and enhance the flow experience. Int. J. Retail. Distrib. Manag. 2023, 51, 285–305. [Google Scholar] [CrossRef]
  81. Chhikara, D.; Sharma, R.; Kaushik, K. Indian E-commerce consumer and their acceptance towards chatbots. Acad. Mark. Stud. J. 2022, 26, 1–10. [Google Scholar]
  82. Pereira, T.; Limberger, P.F.; Minasi, S.M.; Buhalis, D. New Insights into Consumers’ Intention to Continue Using Chatbots in the Tourism Context. J. Qual. Assur. Hosp. Tour. 2022, 25, 754–780. [Google Scholar] [CrossRef]
  83. Van den Broeck, E.; Zarouali, B.; Poels, K. Chatbot advertising effectiveness: When does the message get through? Comput. Hum. Behav. 2019, 98, 150–157. [Google Scholar] [CrossRef]
  84. Rizomyliotis, I.; Kastanakis, M.N.; Giovanis, A.; Konstantoulaki, K.; Kostopoulos, I. “How mAy I help you today?” The use of AI chatbots in small family businesses and the moderating role of customer affective commitment. J. Bus. Res. 2022, 153, 329–340. [Google Scholar] [CrossRef]
  85. Gopinath, K.; Kasilingam, D. Antecedents of intention to use chatbots in service encounters: A meta-analytic review. Int. J. Consum. Stud. 2023, 47, 2367–2395. [Google Scholar] [CrossRef]
  86. Ragheb, M.A.; Tantawi, P.; Farouk, N.; Hatata, A. Investigating the acceptance of applying chat-bot (Artificial intelligence) technology among higher education students in Egypt. Int. J. High. Educ. Manag. 2022, 8, 1–13. [Google Scholar] [CrossRef]
  87. Paraskevi, G.; Saprikis, V.; Avlogiaris, G. Modeling Nonusers’ Behavioral Intention towards Mobile Chatbot Adoption: An Extension of the UTAUT2 Model with Mobile Service Quality Determinants. Hum. Behav. Emerg. Technol. 2023, 2023, 8859989. [Google Scholar] [CrossRef]
  88. Pelau, C.; Dabija, D.C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav. 2021, 122, 106855. [Google Scholar] [CrossRef]
  89. Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The impact of chatbot conversational skill on engagement and perceived humanness. J. Manag. Inf. Syst. 2020, 37, 875–900. [Google Scholar] [CrossRef]
  90. Alharbi, N.; Ud Din, F.; Paul, D.; Sadgrove, E. Driving AI chatbot adoption: A systematic review of factors, barriers, and future research directions. J. Open Innov. Technol. Mark. Complex. 2025, 11, 100590. [Google Scholar] [CrossRef]
  91. Lee, S.; Lee, N.; Sah, Y.J. Perceiving a Mind in a Chatbot: Effect of Mind Perception and Social Cues on Co-presence, Closeness, and Intention to Use. Int. J. Hum.–Comput. Interact. 2020, 36, 930–940. [Google Scholar] [CrossRef]
  92. Shumanov, M.; Johnson, L. Making conversations with chatbots more personalized. Comput Hum Behav 2021, 117, 106627. [Google Scholar] [CrossRef]
  93. Rajaobelina, L.; Prom Tep, S.; Arcand, M.; Ricard, L. Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychol. Mark. 2021, 38, 2339–2356. [Google Scholar] [CrossRef]
  94. Mehra, B. Chatbot personality preferences in Global South urban English speakers. Soc. Sci. Humanit. Open 2021, 3, 100131. [Google Scholar] [CrossRef]
  95. Drouin, M.; Sprecher, S.; Nicola, R.; Perkins, T. Is chatting with a sophisticated chatbot as good as chatting online or FTF with a stranger? Comput. Hum. Behav. 2022, 128, 107100. [Google Scholar] [CrossRef]
  96. Zarouali, B.; Makhortykh, M.; Bastian, M.; Araujo, T. Overcoming polarization with chatbot news? Investigating the impact of news content containing opposing views on agreement and credibility. Eur. J. Commun. 2021, 36, 53–68. [Google Scholar] [CrossRef]
  97. Klein, K.; Martinez, L.F. The impact of anthropomorphism on customer satisfaction in chatbot commerce: An experimental study in the food sector. Electron. Commer. Res. 2023, 23, 2789–2825. [Google Scholar] [CrossRef]
  98. Shen, W.; Li, S. Influence of the Use of Emojis by Chatbots on Interaction Satisfaction. J. Mark. Dev. Compet. 2025, 19, 17. [Google Scholar] [CrossRef]
  99. Biloš, A.; Budimir, B. Understanding the Adoption Dynamics of ChatGPT among Generation Z: Insights from a Modified UTAUT2 Model. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 863–879. [Google Scholar] [CrossRef]
  100. Al-Shafei, M. Navigating Human-Chatbot Interactions: An Investigation into Factors Influencing User Satisfaction and Engagement. Int. J. Hum.–Comput. Interact. 2025, 41, 411–428. [Google Scholar] [CrossRef]
  101. Mou, Y.; Meng, X. Alexa, it is creeping over me—Exploring the impact of privacy concerns on consumer resistance to intelligent voice assistants. Asia Pac. J. Mark. Logist. 2024, 36, 261–292. [Google Scholar] [CrossRef]
  102. Alabed, A.; Javornik, A.; Gregory-Smith, D.; Casey, R. More than just a chat: A taxonomy of consumers’ relationships with conversational AI agents and their well-being implications. Eur. J. Mark. 2024, 58, 373–409. [Google Scholar] [CrossRef]
  103. Kwangsawad, A.; Jattamart, A. Overcoming customer innovation resistance to the sustainable adoption of chatbot services: A community-enterprise perspective in Thailand. J. Innov. Knowl. 2022, 7, 100211. [Google Scholar] [CrossRef]
  104. Cheng, Y.; Jiang, H. How Do AI-driven Chatbots Impact User Experience? Examining Gratifications, Perceived Privacy Risk, Satisfaction, Loyalty, and Continued Use. J. Broadcast. Electron. Media 2020, 64, 592–614. [Google Scholar] [CrossRef]
  105. Li, B.; Chen, Y.; Liu, L.; Zheng, B. Users’ intention to adopt artificial intelligence-based chatbot: A meta-analysis. Serv. Ind. J. 2023, 43, 1117–1139. [Google Scholar] [CrossRef]
  106. Magno, F.; Dossena, G. The effects of chatbots’ attributes on customer relationships with brands: PLS-SEM and importance–performance map analysis. TQM J. 2022, 35, 1156–1169. [Google Scholar] [CrossRef]
  107. Zhang, B.; Zhu, Y.; Deng, J.; Zheng, W.; Liu, Y.; Wang, C.; Zeng, R. “I Am Here to Assist Your Tourism”: Predicting Continuance Intention to Use AI-based Chatbots for Tourism. Does Gender Really Matter? Int. J. Hum.–Comput. Interact. 2023, 39, 1887–1903. [Google Scholar] [CrossRef]
  108. Stritch, J.M.; Pedersen, M.J.; Taggart, G. The Opportunities and Limitations of Using Mechanical Turk (MTURK) in Public Administration and Management Scholarship. Int. Public Manag. J. 2017, 20, 489–511. [Google Scholar] [CrossRef]
  109. Ramesh, A.; Chawla, V. Chatbots in Marketing: A Literature Review Using Morphological and Co-Occurrence Analyses. J. Interact. Mark. 2022, 57, 472–496. [Google Scholar] [CrossRef]
  110. Calvaresi, D.; Ibrahim, A.; Calbimonte, J.P.; Schegg, R.; Fragniere, E.; Schumacher, M. The Evolution of Chatbots in Tourism: A Systematic Literature Review. In Information and Communication Technologies in Tourism; Wörndl, W., Koo, C., Stienmetz, J.L., Eds.; Springer International Publishing: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
  111. Alsharhan, A.; Al-Emran, M.; Shaalan, K. Chatbot Adoption: A Multiperspective Systematic Review and Future Research Agenda. IEEE Trans. Eng. Manag. 2023, 71, 10232–10244. [Google Scholar] [CrossRef]
  112. Yatawara, K.; Sampath, T.; Kalupahana, P.L.; Rathnayake, S.; Jayasuriya, N.; Rathnayake, N. A Systematic Review on Consumer Adoption of AI-driven Chatbots. Vis. J. Bus. Perspect. 2025, 2, 1–14. [Google Scholar] [CrossRef]
  113. Sam, S.J.I.; Jasim, K.M. Diving into the technology: A systematic literature review on strategic use of chatbots in hospitality service encounters. Manag. Rev. Q. 2025, 75, 527–555. [Google Scholar] [CrossRef]
  114. Jasim, K.M.; Malathi, A.; Bhardwaj, S.; Aw, E.C.X. A systematic review of AI-based chatbot usages in healthcare services. J. Health Organ. Manag. 2025, 39, 877–899. [Google Scholar] [CrossRef] [PubMed]
  115. Zhang, F.; Li, Y.; Sheng, D. From bias to belief: A meta-analysis of user trust in chatbot adoption and its antecedents. J. Electron. Commer. Res. 2025, 25, 266–305. [Google Scholar]
  116. Bakkouri, B.E.; Raki, S.; Belgnaoui, T. The Role of Chatbots in Enhancing Customer Experience: Literature Review. Procedia Comput. Sci. 2022, 203, 432–437. [Google Scholar] [CrossRef]
Figure 1. PRISMA-based model of the article selection process. PRISMA, Preferred Reporting Items for Systematic reviews and Meta-Analyses.
Figure 2. Evolution of number of publications from 2015 to 2025.
Figure 3. Methods of selected manuscripts.
Figure 4. Example of Study Sectors.
Figure 5. Methods of data analysis (empirical studies).
Table 1. Search string.
Categories | Keywords
Category 1 | (“chatbot” OR “chat bot” OR “virtual assistant” OR “chatterbot” OR “conversational agent” OR “natural language interface” OR “talkbot” OR “talk bot” OR “AI assistant” OR “artificial intelligence assistant” OR “automated assistant” OR “intelligent assistant” OR “intelligent virtual agent” OR “AI-powered assistant” OR “automated conversational system” OR “conversational AI” OR “service chatbot” OR “customer service chatbot” OR “intelligent agent” OR “digital assistant” OR “voice assistant” OR “embodied conversational agent” OR “ECA” OR “dialogue system”)
Category 2: Consumer and psychology | (“consumer” OR “user” OR “customer” OR “experience” OR “attitude” OR “emotion” OR “interaction” OR “conversation” OR “acceptation” OR “adoption” OR “technology adoption” OR “use intention” OR “behavioral intention” OR “continuance intention” OR “post-adoption” OR “continued use” OR “customer satisfaction” OR “user engagement” OR “customer engagement” OR “service experience”)
Category 3: Consumer behavior and technology adoption theories | (“theory of planned behavior” OR “TPB” OR “TAM” OR “technology acceptance model” OR “TTF” OR “task technology fit” OR “UTAUT” OR “theory of acceptance and usage of technology” OR “ITM” OR “initial trust model” OR “ELM” OR “elaboration likelihood model” OR “TRA” OR “theory of reasoned action” OR “IDT” OR “innovation diffusion theory” OR “SOR” OR “stimulus organism-response” OR “expectation-confirmation model” OR “anthropomorphism” OR “theory-of-mind” OR “U&G model” OR “users and gratification model” OR “trust-commitment theory” OR “consumer acceptance model” OR “diffusion of innovation theory” OR “technology readiness” OR “emotion” OR “emotional response” OR “affect” OR “affective response” OR “social presence” OR “perceived warmth” OR “perceived intelligence” OR “humor” OR “perceived humor” OR “empathy” OR “perceived empathy” OR “anthropomorphism” OR “human-likeness” OR “humanlike” OR “mind perception” OR “uncanny valley” OR “parasocial interaction” OR “symbolic interaction” OR “emotional attachment” OR “relational warmth”)
Table 2. List of the journals featuring two or more articles on chatbot adoption.

| Journal | Frequency | Impact Factor (2025) | Subject Area |
|---|---|---|---|
| International Journal of Human–Computer Interaction | 21 | 4.7 | Management Information Systems |
| Computers in Human Behavior | 13 | 9.9 | Management Information Systems |
| Journal of Theoretical and Applied Electronic Commerce Research | 11 | 4.6 | Management Information Systems |
| Journal of Business Research | 6 | 11.3 | Marketing |
| Journal of Retailing and Consumer Services | 5 | 10.4 | Marketing |
| Education and Information Technologies | 4 | 5.4 | Information Technology |
| European Journal of Marketing | 3 | 5.2 | Marketing |
| Journal of Service Management | 3 | 11.8 | Management |
| Electronic Markets | 2 | 8.5 | Management Information Systems |
| International Journal of Bank Marketing | 2 | 6.7 | Marketing |
| Psychology and Marketing | 2 | 4.9 | Marketing |
| Other journals | 54 | — | Many subject areas |
Table 3. Top 10 most-cited articles.

| Rank | Study | Journal (Impact Factor) | Citations (ABI/Inform Global, Business Source Premier, and Web of Science) | Type of Study | Keywords |
|---|---|---|---|---|---|
| 1 | [54] | Electronic Markets (8.5) | 1723 | Quantitative | Artificial intelligence, chatbot, anthropomorphism, social presence, compliance, customer service |
| 2 | [1] | Marketing Science (5.4) | 1637 | Quantitative | Artificial intelligence, chatbot, conversational commerce, new technology |
| 3 | [55] | Computers in Human Behavior (9.9) | 1571 | Quantitative | Disembodied conversational agents, chatbots, service encounters, social presence, anthropomorphism |
| 4 | [56] | Journal of Business Research (11.3) | 1382 | Quantitative | Chatbot, communication, digital marketing, luxury brand, service agents |
| 5 | [57] | Computers in Human Behavior (9.9) | 1371 | Quantitative | CMC, instant messaging, IM chatbot, cleverbot |
| 6 | [58] | Computers in Human Behavior (9.9) | 1259 | Quantitative | Online chat agents, message interactivity, identity cue, anthropomorphic visual cue, compensation effect |
| 7 | [7] | Digital Health (4.7) | 941 | Qualitative | Acceptability, AI, artificial intelligence, bot, chatbot |
| 8 | [4] | Future Generation Computer Systems (7.3) | 935 | Quantitative | Human–computer interactions, chatbots, affective computing, psychophysiology |
| 9 | [59] | Computers in Human Behavior (9.9) | 623 | Quantitative | CMC, interest, novelty effect, education, technology |
| 10 | [60] | Computers in Human Behavior (9.9) | 491 | Quantitative | Human–machine communication, computers are social actors paradigm, cognitive-affective processing system, artificial intelligence, chatbot |
Table 4. Quantitative study samples.

| Sample Size | Number of Studies |
|---|---|
| [0 to 150] | 32 |
| [151 to 300] | 57 |
| [301 to 500] | 49 |
| [501 to 800] | 35 |
| [801 to 1000] | 4 |
Table 5. Qualitative study samples.

| Sample Size | Number of Studies |
|---|---|
| [0 to 20] | 10 |
| [21 to 40] | 11 |
| [41 to 60] | 4 |
| [61 to 100] | 2 |
| [101 to 200] | 1 |
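The bracket counts reported in Tables 4 and 5 amount to a simple binning of per-study sample sizes. A sketch of that tally, using made-up sample sizes rather than the review's raw data:

```python
# Hypothetical sketch: count how many studies fall into each inclusive
# sample-size bracket, as in Tables 4 and 5. The `sizes` list below is
# invented for illustration, not data from the review.

def bin_counts(sizes, brackets):
    """Map each '[lo to hi]' bracket to the number of sizes it contains."""
    return {f"[{lo} to {hi}]": sum(lo <= s <= hi for s in sizes)
            for (lo, hi) in brackets}

brackets = [(0, 150), (151, 300), (301, 500), (501, 800), (801, 1000)]
sizes = [120, 180, 250, 320, 480, 650, 900]  # illustrative only
print(bin_counts(sizes, brackets))
```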
Table 6. Distribution by country.

| Country | Frequency | Country | Frequency | Country | Frequency |
|---|---|---|---|---|---|
| China | 35 | Italy | 5 | Norway | 2 |
| Multi-Countries | 25 | Saudi Arabia | 4 | Lebanon | 2 |
| USA | 20 | Australia | 4 | Germany | 2 |
| India | 18 | Portugal | 4 | Thailand | 2 |
| UK | 13 | Spain | 3 | Vietnam | 2 |
| Korea | 10 | Croatia | 3 | Turkey | 2 |
| Netherlands | 9 | Sri Lanka | 3 | Pakistan | 2 |
| Malaysia | 6 | Canada | 3 | Singapore | 2 |
Table 7. Frequency of use of different theoretical models.

| Theory | Frequency |
|---|---|
| Anthropomorphism | 59 |
| Technology Acceptance Model (TAM) | 49 |
| Unified Theory of Acceptance and Use of Technology (including META-UTAUT and UTAUT2) | 26 |
| Social Presence Theory | 11 |
| Expectation-Confirmation Model | 7 |
| SOR | 7 |
| CASA | 6 |
| U&G Model | 4 |
| Information Systems Success Model (ISS) | 3 |
| AIDUA | 3 |
| Language-based models | 3 |
| Trust-Commitment Theory | 3 |
| SERVQUAL | 3 |
| Consumer Acceptance Model | 2 |
| Diffusion of Innovation Theory (DOI) | 2 |
| Human–Computer Interaction | 2 |
| Affordance Theory | 2 |
| Trust Theory | 2 |
| Heuristic Systematic Model | 2 |
| ICT Adoption Framework | 1 |
| Cognitive Load Theory | 1 |
| Social Identity Theory | 1 |
| Dual Factor Theory | 1 |
| Customer Experience Theory | 1 |
| Stimulus-Organism-Response (SOR) | 1 |
| Theory of Planned Behavior | 1 |
| Social Exchange Theory | 1 |
| sRAM | 1 |
| Social Representation Theory | 1 |
| Social Penetration Theory | 1 |
| Social Impact Theory | 1 |
| Similarity Attraction Theory | 1 |
| Human–Machine Communication Theory | 1 |
| UX Honeycomb Model | 1 |
| Theory of Mind | 1 |
| HVL Model | 1 |
| Expectancy Violation Theory | 1 |
| Theory of Perceived Value | 1 |
| Elaboration-Likelihood Model | 1 |
| SEEK Model | 1 |
| Task-Technology Fit | 1 |
| Attachment Theory | 1 |
| Behavioral Reasoning Theory | 1 |
| Technology Affordance Constraints Theory | 1 |
| Trust-Risk Dual Pathway | 1 |
| Self-Determination Theory | 1 |
| Social Cognition Theory | 1 |
| Decomposed Theory of Planned Behavior | 1 |
| Stress Coping Theory | 1 |
| Parasocial Relationship Theory | 1 |
Table 9. Examples of studies for the anthropomorphism perspective.

| Antecedent | Dependent Variable | Number of Empirical Studies | Effect Significance | Study |
|---|---|---|---|---|
| Perceived anthropomorphic cues and perceived empathy of chatbot | Acceptance towards chatbot | 1 | 1 | [88] |
| Anthropomorphic cues | N/A (2) | Review paper | — | [90] |
| Anthropomorphic cues (mind perception) | Intention of using chatbot (via closeness) | 1 | 1 | [91] |
| Anthropomorphic cues | Satisfaction with chatbot | 1 | 0 (NS) (1) | [70] |
| Anthropomorphic cues | Intention of using chatbot | 1 | 0 (NS) (1) | [9] |
| Anthropomorphic cues | Compliance with chatbot request | 1 | 1 | [54] |
| Social presence | Perceived humanness | 1 | 1 | [89] |
| Anthropomorphic cues and perceived enjoyment | Acceptance towards chatbot | 1 | 1 | [32] |
| Anthropomorphic cues | Chatbot adoption intention | 1 | 1 | [6] |
| Visual anthropomorphic cues | Chatbot adoption intention | 1 | 1 | [58] |
| Chatbot personality (tailored to consumer's personality) | Chatbot adoption | 1 | 1 | [92] |
| Anthropomorphic cues | Trust in chatbots | 1 | 1 | [71] |
| Anthropomorphic creepiness-like traits | Loyalty to chatbots | 1 | 1 | [93] |
| Anthropomorphic creepiness-like traits and perceived technological risk | Privacy concerns when interacting with chatbot | 1 | 1 | [45] |
| Anthropomorphic cues and social presence | Customer experience with chatbot | 1 | 1 | [84] |
| Anthropomorphic-like traits that emulate a friendly personality | Acceptance towards chatbots | 1 | 1 | [94] |
| Anthropomorphic cues | Acceptance towards chatbots | 1 | 1 | [76] |
| Anthropomorphic cues | Chatbot usage intention | 1 | 1 | [77] |
| Anthropomorphic cues | Emotional connection towards the company | 1 | 1 | [55] |
| Anthropomorphic cues | Chatbot adoption | 1 | 1 | [95] |
| Anthropomorphic cues | Chatbot continuation intention | 1 | 1 | [78] |
| Anthropomorphic cues | Acceptance towards chatbot's message | 1 | 1 | [96] |
| Anthropomorphic cues | Chatbot perceived competence | 1 | 1 | [70] |
| Anthropomorphic cues | Utilitarian attitude to use chatbots | 1 | 1 | [2] |
| Anthropomorphic cues | Attitude towards chatbot | 1 | 1 | [97] |

(1) NS = non-significant. (2) N/A = not available.
Table 10. Robust vs. context-dependent drivers of chatbot adoption.

| Predictor | Empirical Evidence in Reviewed Studies | Robustness Classification | Typical Countries Studied | Typical Contexts Where Effects Are Stronger | Typical Moderating Conditions |
|---|---|---|---|---|---|
| Perceived usefulness | 39 empirical studies; all significant | Robust predictor | China, USA, Germany, and UK | Across sectors (tourism, retail, finance, education) | Strongest during pre-adoption evaluation stages |
| Perceived ease of use | 39 empirical studies; 38 significant | Robust predictor | China, India, and UK | E-commerce and service chatbot contexts | Stronger for inexperienced users and student samples |
| Performance expectancy | 22 studies; 21 significant | Robust predictor | China, Korea, and USA | Digital service contexts and transactional chatbots | More important in utilitarian task-oriented interactions |
| Trust in chatbot | 13 studies; all significant | Robust predictor | China, Korea, and USA | Banking, insurance, and financial services | Stronger in high-risk service contexts |
| Anthropomorphism | 59 studies; 54 significant, 5 non-significant | Context-dependent predictor | China, UK, USA, and India | Retail, hospitality, and conversational commerce | Effects vary by chatbot design (text vs. voice) and degree of human-likeness |
| Emotional engagement/enjoyment | 21 studies; 19 significant | Context-dependent predictor | UK | Hedonic service contexts | Stronger for younger consumers (Gen Z) |
| Privacy concerns/perceived risk | 13 studies; all significant negative effects | Context-dependent barrier | China, USA, and Europe | Finance, healthcare, and insurance | Stronger when personal data disclosure is required |
| Technological anxiety | 4 studies; mixed results | Context-dependent barrier | India and China | Low technology-readiness populations | Moderates the relationship between chatbot quality and satisfaction |
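The robust vs. context-dependent split in Table 10 is consistent with a simple share-of-significant-findings heuristic: predictors significant in roughly 95% or more of the reviewed studies are classed as robust. A sketch of that reading (the 0.95 threshold is our illustrative assumption, not a rule stated in the review):

```python
# Illustrative heuristic, inferred from Table 10's classifications:
# label a predictor by the proportion of reviewed studies in which its
# effect was significant.

def classify(total_studies, significant, threshold=0.95):
    """Return a robustness label for a predictor given its evidence counts."""
    if total_studies == 0:
        return "insufficient evidence"
    ratio = significant / total_studies
    return "robust" if ratio >= threshold else "context-dependent"

print(classify(39, 39))  # perceived usefulness: "robust"
print(classify(59, 54))  # anthropomorphism: "context-dependent"
```

Applied to the table's counts, the heuristic reproduces every classification except the barriers, whose labels also reflect effect direction and mixed results rather than significance share alone.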
Latulippe, J.-M.; Ladhari, R. Chatbot Adoption: A Systematic Literature Review. J. Theor. Appl. Electron. Commer. Res. 2026, 21, 98. https://doi.org/10.3390/jtaer21040098
