Chatbot-Based Services: A Study on Customers’ Reuse Intention

Filipe Araújo Silva, Alireza Shabani Shojaei and Belem Barbosa
School of Economics and Management, University of Porto, 4200-464 Porto, Portugal
Research Unit on Governance, Competitiveness and Public Policies (GOVCOPP), Center of Economics and Finance at UPorto (cef.up), School of Economics and Management, University of Porto, 4200-464 Porto, Portugal
Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2023, 18(1), 457-474;
Submission received: 13 December 2022 / Revised: 21 February 2023 / Accepted: 27 February 2023 / Published: 1 March 2023
(This article belongs to the Section Digital Marketing and the Connected Consumer)


The main objective of this article is to investigate the factors that influence customers’ intention to reuse chatbot-based services. The study combines the technology acceptance model (TAM) with other contributions in the literature to develop a theoretical model that predicts and explains customers’ intention to reuse chatbots. Partial least squares structural equation modeling (PLS-SEM) was used to test the proposed hypotheses. Data collected from 201 Portuguese chatbot users were analyzed, and the results showed that user satisfaction, perceived usefulness, and subjective norm are significant predictors of chatbot reuse intention. The findings also indicated that perceived usefulness, perceived ease of use, and trust have a positive impact on attitude toward using chatbots, and that trust has a significant impact on perceived usefulness, user satisfaction, and attitude toward using chatbots. However, attitude toward using chatbots, perceived ease of use, trust, and perceived social presence had no significant effect on reuse intention. The article concludes with theoretical contributions and recommendations for managers.

1. Introduction

The increasing amount of time people spend on the internet and advancements in technology have significantly impacted lifestyles and the business environment [1]. As a result, consumer behavior is constantly evolving, and companies are utilizing various digital marketing strategies to meet customer demands in the digital environment [2]. Companies have adopted and implemented information and communication technologies (ICTs) to enhance their relationship channels and make them constantly available to assist customers [1,3,4]. These technologies also allow customers to spend less time on requested services, improve resource management efficiency within organizations, and allow employees to focus on other activities [5].
Over the last decade, artificial intelligence (AI) has gained importance in the fields of marketing and business [6], namely due to the availability of low-cost but powerful computing and big data. The evolution of AI is enabling companies to utilize chat services for customer support. In fact, there is a growing interest among managers to implement chatbots for automating processes related to customer relationship services [7,8]. There is clear evidence that chatbots are applied by a wide range of industries, including e-commerce [2], hospitality [3,4,9], fashion [10], health services [11,12], banking and financial services [13,14,15,16,17,18,19], to name but a few. Chatbots offer a permanent touchpoint opportunity. With 24/7 availability and the ability to handle large volumes of inquiries, chatbots can efficiently and effectively assist customers with their needs. This helps create positive customer experiences [20,21], as by offering quick and convenient access to information and support, chatbots can enhance customer satisfaction and build loyalty. Currently, more than 50 percent of firms are adopting or planning to adopt chatbot technology [8], and it is estimated that by 2025, approximately 95 percent of online service interactions will be utilizing AI chatbots or live chat [22]. Fokina [23] estimates that 88 percent of customers have had at least one conversation experience with a chatbot. This increasing popularity and usage of chatbots is projected to generate $112 billion in retail sales by 2023 [24]. Despite the growing popularity of chatbots, negative and frustrating experiences have also been reported [25,26]. Sladden [27] argues that customers are losing trust in chatbots, with 42 percent of customers avoiding chatbots for complex inquiries and 15 percent of customers reporting a lack of confidence in using chatbots to communicate with companies. 
This has led to growing attention among scholars regarding the application of AI [28,29,30], including chatbots. However, research on the acceptance of chatbots is limited [17], and few studies have investigated users’ behavior related to chatbots [31].
As defined by Rahane et al. [32], “a chatbot (also known as Chatterbox) is a computer program which conducts communication between a human and machine, commonly through aural or textual methods” (p.1). The existing literature on chatbots demonstrates that they are considered an important technological trend [33] and have been studied for factors that influence user acceptance and adoption, e.g., [32,33]. However, the literature tends to focus on the technical aspects of chatbot development and has not sufficiently studied the users’ perspective. The literature suggests that future research should explore users’ characteristics beyond basic factors such as gender and age, which have already been covered [34]. While keeping this in mind, this article aims to understand the main determinants of chatbot reuse intention. As explained below, it combines theories and proposes a conceptual model to further understand this phenomenon.
Over the years, many researchers have developed models to study the acceptance and use of new technologies [35]. One of the most widely used models in this field is the technology acceptance model (TAM), which has been used as a central theoretical framework to predict, explain, and examine the willingness to use technology systems and the probability of adopting new technology [36,37]. Similar to what is posited by the theory of reasoned action (TRA) [38], TAM, introduced by Davis [39], suggests that attitude towards behavior is the main determinant of behavioral intention. Additionally, TAM suggests that perceived usefulness and perceived ease of use are determinants of attitude [39]. The literature shows that both TRA and TAM are versatile models that enable the inclusion of complementary constructs to explain consumer behavior. Studies conducted on technology adoption often consider additional independent variables to better understand attitudes and intentions [40]. For example, Venkatesh and Davis [41] proposed an extension to TAM, named TAM2, that included additional theoretical constructs covering two main groups of processes: social influence processes, which comprise factors that can lead an individual to adopt a system for social reasons, such as subjective norm, and cognitive instrumental processes, which explore the perceived usefulness determinants (e.g., job relevance, output quality). Other researchers have adapted TAM and added new theoretically justified factors, such as trust, and external antecedents, such as situational involvement, to the model [36].
One variable that has been overlooked in understanding the intention to use technology is satisfaction, even though an important stream of research has shown that satisfaction is a crucial variable in understanding consumers’ intentions [42,43], particularly to further understand post-adoption reactions and explanations of the continued use of information systems [13,44]. For instance, satisfaction is pointed out by the expectation confirmation model (ECM) as one major determinant of usage continuance [44].
In the context of chatbots, Ashfaq et al. [45] argue that few studies have explored the impact of low satisfaction on the intention to reuse chatbot-based services. Additionally, trust and perceived social presence are two variables that stand out in chatbot-based literature [18] and were also considered for this research. Jiang et al. [46] suggested that there is a need to study individuals’ trust in the use of chatbots. In order to address this gap, it is relevant to conduct an empirical study on the intention to reuse chatbot-based services that consider user satisfaction, trust, and perceived social presence.
This article makes several contributions to the field of chatbot research. Firstly, it addresses a new and under-researched topic by examining the determinants of chatbot reuse intentions, which complements the current literature on chatbot acceptance. Secondly, the article adopts a comprehensive model to explain reuse intentions: it extends the technology acceptance model by integrating satisfaction, trust, and perceived social presence. The model demonstrated robustness and provided relevant perspectives that can be adapted for future studies. Thirdly, the article includes empirical data from consumers with prior experience with chatbots. The findings confirmed that satisfaction, perceived usefulness, and subjective norm are the key factors determining reuse intention, while the effects of trust and attitude were found to be non-significant. Overall, the article’s insights offer valuable implications for future research and for designing and implementing chatbot-based services that encourage sustainable technology reuse behavior.
The remainder of the article is organized as follows. The literature review and hypotheses development are presented in Section 2, the methodology adopted in the empirical study is presented in Section 3, and the results are presented in Section 4. The article then includes a discussion of findings in Section 5, and Section 6 and Section 7 present implications, limitations, and future research directions.

2. Literature Review and Hypotheses Development

2.1. Trust

The literature suggests that trust plays a crucial role in technology adoption [47,48,49]. Trust occurs when an individual is confident that their vulnerabilities will not be exploited in a risky online situation [50,51]. Consequently, trust is associated with privacy protection [52]. Trust is an important factor in online services, for instance, in the context of e-commerce [53,54,55]. Specifically, trust has been found to have an impact on the perceived ease of use and perceived usefulness of mobile services [55].
Although trust has been extensively studied in the context of technology and e-services, there is a scarcity of research specifically on chatbot services [9,13]. Trust, as an essential human perception, is crucial for both interpersonal interactions and human-computer interactions [56]. Users of technology, particularly chatbots, may be hesitant to share personal information when they have doubts about the security of the technology [57,58]. A lack of trust in chatbots can lead to consumers’ rejection, as it has a negative impact on technology acceptance and on user satisfaction [59]. Kumar et al. [60] suggested that trust can aid artificial intelligence tools, such as chatbots, in increasing customer engagement. Similarly, research by Hsiao and Chen [28] demonstrated that trust has a direct and positive effect on user satisfaction in the context of food-ordering chatbots. Eren [16] also emphasized the importance of trust, highlighting its significant impact on user satisfaction with chatbot services. Based on these studies, it can be inferred that:
H1. Trust has a positive effect on user satisfaction with chatbots.
Additionally, studies on technology acceptance suggest that individuals who do not trust a technology are less likely to adopt it [49]. Cardona et al. [15] found that trust has a positive effect on the perceived usefulness of insurance chatbots. Therefore, it is hypothesized that:
H2. Trust has a positive effect on the perceived usefulness of chatbots.
Previous studies in the information systems field have shown that trust has a positive influence on attitudes toward the use of new technologies, e.g., [61]. Prior research also revealed that trust has a positive impact on attitudes toward chatbots [62]. Consequently, it was inferred that:
H3. Trust has a positive effect on attitude toward using chatbot-based services.
Previous research has shown that trust has a significant impact on intentions to reuse chatbots [63], specifically in relation to bank chatbot services [13,18] and shopping chatbots [62]. In accordance with these findings, it is hypothesized that:
H4. Trust has a positive effect on chatbot reuse intentions.

2.2. Perceived Social Presence

Biocca et al. [64] defined social presence as the “sense of being with another” (p. 456). Oh et al. [65] argue that social presence has a vital role in the context of online interactions. Research on an online shopping website by Hassanein and Head [66] found that perceived social presence has a positive impact on trust, enjoyment, and perceived usefulness, leading to the intention to make a purchase. Go and Sundar [67] argued that identity cues are an important factor in developing expectations about a chatbot’s performance in interactions, which not only impacts an individual’s psychological response, but it also influences attitudes and behaviors toward conversational agents like chatbots. Kilteni et al. [68] and Moon et al. [69] explain that social presence occurs when chatbot users feel that they are communicating with humans, hence dealing with anthropomorphic virtual agents [34]. Prior studies have revealed that social presence in chatbot usage is significantly related to trust [18,34,70]. As such, the following hypothesis is formulated:
H5. Perceived social presence has a positive effect on trust.
Furthermore, Ng et al. [18] confirmed that perceived social presence has a positive impact on usage intentions. Thus, it is inferred that:
H6. Perceived social presence has a positive effect on chatbot reuse intentions.

2.3. Satisfaction

User satisfaction can be defined as the evaluation of a product or service by the consumer after purchase and consumption or use [71,72]. According to the expectation confirmation model, satisfaction occurs when consumers’ expectations are met [73]. In the context of information systems, the expectation confirmation model (ECM) states that user satisfaction determines post-acceptance behavior, particularly the continued usage of technologies [44]. This approach has been adopted in studies aimed at evaluating consumer experiences [71,74,75,76], including in online activities [72]. Furthermore, the literature generally shows that there is a positive relationship between user satisfaction and chatbot reuse intention [45]. Chatbots can provide comprehensive and accurate information through digital tools, which reduces uncertainty and enhances customer satisfaction [77,78]. Their ability to communicate according to the profiles of the target audience contributes to user satisfaction with the service provided [20,79]. Studies have shown that the provision of credible, accurate, customized, up-to-date, and reliable information with chatbots can have a positive impact on customer satisfaction [45,77,79,80,81,82]. Research has documented that user satisfaction will impact users’ intention to reuse chatbot services [6,13,28,45,83,84,85]. Therefore, it is assumed that:
H7. User satisfaction has a positive effect on chatbot reuse intention.

2.4. Perceived Usefulness

Davis et al. [86] explain that perceived usefulness is “the prospective user’s subjective probability that using a specific application system will increase his or her job performance within an organizational context” (p. 985). This suggests that perceived usefulness is a determinant of attitude toward and intention of using chatbots. For example, perceived usefulness has been considered in various studies to explain attitudes and behavioral intentions related to technology services such as e-learning [87] and hospital information services [88].
Several studies have shown that information systems can provide content to improve perceived usefulness [6,89]. The researchers have argued that perceived usefulness is a strong determinant of technology usage [21,90]. Empirical studies have also shown that perceived usefulness has a significant influence on attitudes toward using chats and chatbots [10,90,91]. Therefore, it is expected that:
H8. Perceived usefulness has a positive effect on attitude toward using chatbot-based services.
Additionally, studies in the context of using a chatbot in hospitality and tourism [4], retail [92], e-commerce websites [90], Amazon’s Mechanical Turk [45], and bank services [13] have proven that perceived usefulness significantly influences the reuse of chatbot services. In line with these findings, it is assumed that:
H9. Perceived usefulness has a positive effect on chatbot reuse intention.

2.5. Perceived Ease of Use

The perceived ease of use, as defined by Davis [39], is “the degree to which the prospective user expects the target system to be free of effort” (p. 985). The literature stresses that chatbots make it easy and helpful for customers to access information about products or services [4] and foster customer responses [21], namely due to perceived ease of use and perceived usefulness. Kasilingam [62] argued that consumers are more likely to use chatbots when technical infrastructure such as “internet-enabled phone plans”, “user-friendly interfaces”, and “messenger apps” is already available. As technology users have predetermined assumptions about the ease or difficulty of using technology [39], perceived ease of use is expected to have a positive impact on attitudes toward using technology [39,41]. This has been demonstrated in various contexts, such as information technology [93], e-wallet services [94], and mobile learning technologies [95]. Customers are willing to adopt a technology that they can easily use and understand. Perceived ease of use is also expected to have an impact on perceived usefulness, as demonstrated in several contexts, such as e-learning [87] and the health industry [96]. Therefore, it is assumed that:
H10. Perceived ease of use has a positive effect on perceived usefulness.
Additionally, the literature shows the impact of perceived ease of use on attitudes toward using chatbots [10,62]. In line with this, it is expected that:
H11. Perceived ease of use has a positive effect on attitude toward using chatbot-based services.
The literature also suggests that perceived ease of use affects the intention to use chatbots [4,45,62,92]. Therefore, we hypothesize that:
H12. Perceived ease of use has a positive effect on chatbot reuse intention.

2.6. Subjective Norm

Subjective norm refers to the perception that a person has of what the people who are most important to them would expect them to do [38,41]. Based on the theory of reasoned action and the theory of planned behavior, previous studies have suggested that social influences play a role in people’s behavior, including technology adoption [40,41]. Research conducted by Huang and Kao [97] on the usage of chatbot services during a pandemic indicated that subjective norm has a positive effect on perceived usefulness in hedonic service situations. Previous research revealed that individuals are more likely to decide to select and use new technology when their social circles, such as friends, family, relatives, and guardians, approve or accept the technology [31,98]. There are some studies that have demonstrated the impact of subjective norms on perceived usefulness in contexts such as e-learning [87] and health applications usage [96]. In line with these contributions, it is assumed that:
H13. Subjective norm has a positive effect on the perceived usefulness of chatbots.
Additionally, several studies have highlighted the positive impact of subjective norms on behavioral intentions, specifically regarding technology-related products and services, including e-banking [99], mobile banking [100], mobile applications [101,102], and online hotel booking [103]. In the context of chatbots, Chang et al. [11] observed that subjective norm plays a significant role in the use of medical chatbots. Similarly, Patil and Kulkarni [19] and Aslam et al. [31] found a positive and significant impact of subjective norms on the intention to use chatbots. Following these studies, it is considered that:
H14. Subjective norm has a positive effect on chatbot reuse intention.

2.7. Attitude toward Using Chatbots

According to Fishbein and Ajzen [38], attitude toward a behavior is the person’s feelings (positive and negative) in relation to the behavior they intend to achieve, and it is particularly important because those beliefs and evaluations determine behavioral intention [104]. Attitude has been applied by several theories and models, such as the theory of reasoned action and the technology acceptance model, to explain behavioral intention.
In general, attitude towards the use of a certain technology is expected to determine the intention to use it [86]. Numerous researchers have empirically supported that attitude is positively associated with the intention to use technology systems such as health technology [88], e-government [105,106], travel websites [107], and e-banking [108]. In addition, other authors, e.g., [1,62], revealed that the attitude towards using chatbot-based services significantly influenced the intention to use and that dissatisfying chatbot interactions reduced intention to use chatbots. In line with these findings, the following research hypothesis was defined:
H15. Attitude towards using chatbot-based services has a positive effect on chatbot reuse intention.
The literature review has enabled the definition of 15 research hypotheses, leading to the conceptual model presented in Figure 1.

3. Method

In order to test the research hypotheses, a quantitative approach was adopted through an online survey conducted in May and June 2022.

3.1. Materials and Measurements

The questionnaire used in this study consisted of scales that have been previously developed and validated by other researchers to measure reuse intention [18], attitude toward using chatbots [109], subjective norm [95,110], perceived usefulness [41,55,95], perceived ease of use [41,95], trust [18], perceived social presence [34], and user satisfaction [77]. The resulting 40 measurement items were adapted to the specific context of the current study. Demographic data (i.e., gender, age, and education) were also collected for sample characterization. The measurement items and their corresponding mean and standard deviation values are presented in Table 1.
Prior to the main data collection, a pilot test was conducted with 30 customers who possessed characteristics similar to the study population. This step was crucial in ensuring that the research instrument was appropriate, that the instructions, questions, and items were clear and easy to understand, and that the questionnaire was free of typographical errors. Additionally, this pilot test allowed for the identification of any issues or weaknesses within the questionnaire. Based on the feedback received from the participants in the pilot study, minor adjustments were made to the wording and phrasing of the instructions. The participants of the pilot study confirmed the clarity of the items and estimated that the time required to complete the survey was less than 5 min.

3.2. Population and Sample

The study population consisted of Portuguese consumers who had prior experience with chatbot-based services. A convenience sampling technique was utilized, as participants were primarily recruited through the researchers’ personal connections via email and social networking sites. In an effort to include a wider range of demographics, the researchers also approached random individuals in public spaces in two Portuguese cities. The research instrument provided information on its objectives and the ethical principles followed by the research, ensuring that participation was confidential, anonymous, and voluntary. The objectives of the study were explained, and participants were then asked to provide their informed consent for data collection, analysis, and dissemination procedures.
In order to ensure that participants had prior experience interacting with chatbots, a screening question was included at the beginning of the survey asking, “Have you ever interacted/used a chatbot before?” The instructions were clear that the study covered both text and voice-based customer service chatbots, such as making appointments for doctors or car assistance.
For the main data collection, a total of 258 responses were obtained. However, due to unengaged responses (i.e., selecting the same answer for all items of the questionnaire), 57 responses were removed, and a final sample of 201 respondents was considered for analysis. Of the respondents, 33.8 percent (n = 68) were male, 65.2 percent (n = 131) were female, and 2 identified as another gender. In terms of age, 65.7 percent (n = 132) of the respondents were between 18–25 years old, 16.9 percent (n = 34) were between 26–35 years old, 5.5 percent (n = 11) were between 36–45 years old, 7.0 percent (n = 14) were between 46–55 years old, and 5.0 percent (n = 10) were over 55 years old. In terms of education, 51.7 percent (n = 104) of the respondents had a bachelor’s degree, 25.4 percent (n = 51) had completed secondary education, and 22.9 percent (n = 46) had completed post-graduate education.

4. Results

This section presents the findings of the study. The analysis was conducted using Smart PLS structural equation modeling.

4.1. Measurement Model Evaluation

As suggested by Hair et al. [111], measurement model testing involves analyzing: outer loadings, item/indicator reliability, construct reliability, and convergent and discriminant validity. The results of the validity and reliability of the measurement model are presented in Table 2.
To evaluate the accuracy of the measures, first, the assessment of the measurement model was conducted. The individual reliability of each item was evaluated by factor loadings. As indicated in Table 2, only items carrying a loading of 0.50 and above were included in the analysis [112], which indicates that the shared variance between the item and its construct is greater than the error variance. At this stage, items (ATT5, PUS6, and SNO5) with factor loading less than 0.5 were eliminated. Furthermore, the measurement model shows good reliability because Cronbach’s Alpha for all constructs was greater than 0.7 [113], and the estimates of composite reliability exceed 0.7 [114]. Evidence of convergent validity was provided by the values for Average Variance Extracted (AVE). The results show that the AVE of each construct in the model was more than 0.50. The constructs should also show high discriminant validity [111,114,115]. According to Fornell and Larcker [115], the square root of the AVE of each construct must be greater than the inter-construct correlations of the model (see Table 2).
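The reliability and validity checks described above follow standard formulas and can be reproduced directly from the outer loadings and item responses. The sketch below (Python with NumPy; the loadings and correlations are hypothetical, not the values from Table 2) computes Cronbach's alpha, composite reliability, AVE, and the Fornell–Larcker criterion:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted from standardized outer loadings."""
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability (rho_c) from standardized outer loadings."""
    num = loadings.sum() ** 2
    return float(num / (num + (1.0 - loadings ** 2).sum()))

def fornell_larcker_ok(ave_vals: np.ndarray, corr: np.ndarray) -> bool:
    """sqrt(AVE) of each construct must exceed its correlations with all others."""
    root = np.sqrt(ave_vals)
    for i in range(len(ave_vals)):
        others = np.abs(np.delete(corr[i], i))
        if root[i] <= others.max():
            return False
    return True

# Hypothetical 4-item construct (illustrative loadings only)
lam = np.array([0.78, 0.81, 0.74, 0.69])
print(round(ave(lam), 3), round(composite_reliability(lam), 3))
```

Items with loadings below 0.50 would be dropped and the statistics recomputed, mirroring the elimination of ATT5, PUS6, and SNO5 described above.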

4.2. Assessment of Structural Model and Hypotheses Testing

Following the structural model evaluation using SmartPLS (v.3.3.9), the detailed list of path coefficients with their respective t-values, R2, Q2, VIF, and F2 is presented in Table 3 and Table 4. In order to assess the statistical significance of the path coefficients (Figure 2), this study implemented a bootstrapping resampling procedure using 10,000 subsamples [116] and the default settings (i.e., parallel processing, no sign changes).
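The bootstrapping logic is straightforward to sketch: resample respondents with replacement, re-estimate each path on every subsample, and divide the original coefficient by the standard deviation of the bootstrap estimates to obtain a t-value. The minimal Python illustration below uses synthetic data and a single standardized path as a stand-in for the full PLS model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: one predictor, one outcome (not the study's data)
n = 201
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)

def path_coef(x, y):
    """Standardized path coefficient for a single predictor (the correlation)."""
    return float(np.corrcoef(x, y)[0, 1])

# Bootstrap: resample respondents with replacement, re-estimate the path
B = 10_000
boots = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boots[b] = path_coef(x[idx], y[idx])

beta = path_coef(x, y)
t_value = beta / boots.std(ddof=1)  # original coefficient / bootstrap SE
print(f"beta={beta:.3f}, t={t_value:.2f}")
```

A path is then deemed significant at the 5% (1%) level when |t| exceeds 1.96 (2.576), the convention applied in Table 4.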
As shown in Table 3, the R2 values for the endogenous constructs range from 0.138 to 0.759. Cohen [117] suggested that R2 values for endogenous latent variables are assessed as follows: 0.26 (substantial), 0.13 (moderate), and 0.02 (weak). Thus, the R2 values for the endogenous latent variables are moderate to substantial. As suggested by Stone [118] and Geisser [119], this study used a cross-validated redundancy criterion to examine the predictive relevance (Q2) of the exogenous latent variables on the reflective endogenous latent variables. To assess Q2, blindfolding using SmartPLS 3 was performed. As indicated in Table 3, all Q2 values are greater than zero, which indicates that the model has predictive relevance for these constructs [111].
Hair et al. [111] suggested that VIF values should be less than 5.0. As revealed in Table 4, all exogenous constructs have VIF values less than 5.0, thus indicating no multicollinearity issue in the structural model.
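The VIF of each predictor equals 1/(1 − R²), where R² comes from an auxiliary regression of that predictor on all the others; values below 5.0 indicate no problematic collinearity. A sketch with synthetic data (not the study's constructs):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor per column: 1 / (1 - R^2) from regressing
    that column on all remaining columns (plus an intercept)."""
    k = X.shape[1]
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # three independent predictors -> VIF near 1
# Add a near-duplicate of the first column -> its VIF explodes
X_collinear = np.column_stack([X, X[:, 0] + 0.1 * rng.normal(size=500)])
print(vif(X).round(2), vif(X_collinear).round(2))
```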
As shown in Table 4, the results revealed a significant impact of Trust on Satisfaction (H1: β = 0.563, t = 6.427, p < 0.01), Perceived Usefulness (H2: β = 0.204, t = 2.501, p < 0.05), and Attitude toward using chatbots (H3: β = 0.240, t = 2.300, p < 0.05). However, Trust did not have a significant impact on Reuse Intention (H4: β = 0.080, t = 1.136, n.s.). Thus, H1, H2, and H3 are supported, while H4 is not.
H5 and H6 predict that Perceived Social Presence positively affects Trust and Reuse Intention. As indicated in Table 4, the positive effect of Perceived Social Presence on Trust (H5: β = 0.372, t = 5.112, p < 0.01) is supported. However, the impact of Perceived Social Presence on Reuse Intention (H6: β = 0.010, t = 0.180, n.s.) is not supported. H7 predicts that User Satisfaction positively affects Reuse Intention. As expected, a significant positive relationship between the two variables was confirmed (H7: β = 0.474, t = 4.326, p < 0.01).
Results also confirmed that Perceived Usefulness would be positively related to Attitude toward using chatbots (H8: β = 0.326, t = 4.149, p < 0.01) and Reuse Intention (H9: β = 0.283, t = 3.387, p < 0.01). As predicted, H8 and H9 were supported.
H10, H11, and H12 propose that Perceived Ease of Use has a positive impact on Perceived Usefulness (H10), Attitude toward using chatbot-based services (H11), and Reuse Intention (H12). As predicted, significant positive relationships between Perceived Ease of Use and Perceived Usefulness (H10: β = 0.288, t = 3.014, p < 0.01) and Attitude toward using (H11: β = 0.304, t = 3.848, p < 0.01) are demonstrated. However, the impact of Perceived Ease of Use on Reuse Intention (H12: β = 0.049, t = 0.656, n.s.) is not supported. The study proposed that Subjective Norm has a positive impact on Perceived Usefulness (H13) and Reuse Intention (H14). As predicted, the effects for both paths (H13: β = 0.420, t = 6.184, p < 0.01 and H14: β = 0.167, t = 2.148, p < 0.05) are positive and significant. H15 hypothesizes that Attitude toward using positively affects Reuse Intention. As shown in Table 4, the path estimate between Attitude toward using and Reuse Intention is negative and non-significant (H15: β = −0.022, t = 0.293, n.s.). Thus, H15 is not supported.
Based on guidelines suggested by Cohen [117], f2 values of 0.35, 0.15, and 0.02 for endogenous latent variables in the inner path model are considered large, medium, and small effect sizes, respectively. The results indicated that the effect of Trust on Satisfaction (H1) is large, while the effects of User Satisfaction on Reuse Intention (H7) and of Subjective Norm on Perceived Usefulness (H13) are medium. The effects of Trust on Perceived Usefulness (H2), Trust on Attitude toward using Chatbots (H3), Perceived Usefulness on Attitude toward using Chatbots (H8), Perceived Usefulness on Reuse Intention (H9), Perceived Ease of Use on Perceived Usefulness (H10), Perceived Ease of Use on Attitude toward using Chatbots (H11), and Subjective Norm on Reuse Intention (H14) are small.
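Cohen's f2 for a path is computed as f2 = (R2_included − R2_excluded) / (1 − R2_included), i.e., the change in the endogenous construct's R2 when the predictor is dropped from the model. A short sketch with hypothetical R2 values (not those from Table 4):

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f2: relative change in R2 when a predictor is removed."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def effect_size_label(f2: float) -> str:
    """Cohen's thresholds: 0.02 small, 0.15 medium, 0.35 large."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# Hypothetical example: dropping a predictor lowers R2 from 0.60 to 0.55
print(effect_size_label(f_squared(0.60, 0.55)))  # f2 = 0.125 -> "small"
```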

5. Discussion

The present study aimed to explore chatbot reuse intentions by identifying and analyzing the impacts of their main determinants. The study’s findings provide relevant contributions to both researchers and practitioners. Overall, the study demonstrates that the technology acceptance model [39,41] provides an appropriate framework for analyzing reuse intentions toward chatbot-based services. This section discusses the key findings of this article.
The study confirmed that trust has a significant impact on perceived usefulness, consistent with the findings of Cardona et al. [15], who argued that trust positively impacts perceived usefulness. The impact of trust on user satisfaction was also confirmed, as reported by Hsiao and Chen [28] and Eren [16], who found that users feel more satisfied when they trust the chatbot. Furthermore, the findings support that trust has an impact on attitudes toward using chatbots, as previously reported by Kasilingam [62]. However, the relationship between trust and reuse intentions was not significant, which contradicts some existing studies on chatbots [13,18,62]. Previous research on AI-based systems has similarly concluded that trust does not appear to have a significant impact on users’ behavioral intentions toward chatbots [120]. One possible explanation is that chatbots are still in the early stages of development, and not all users may feel comfortable interacting with them [120]. Additionally, as argued by Seitz et al. [12], users may have concerns about the software’s performance, reliability, and accuracy. It has also been reported that 42% of customers avoid using chatbots for complex inquiries, and 15% of customers express a lack of confidence in communicating with companies through chatbots [27]. Similarly, although existing studies have shown that social presence in chatbot usage is significantly related to trust [18,34,70], the results do not support an impact of perceived social presence on reuse intention. The results of this study also showed that satisfaction stands out as a main determinant of chatbot reuse intention and should be considered to further understand consumer behavior on this matter; past research has likewise demonstrated that user satisfaction impacts users’ intention to reuse chatbots [6,13,28,83].
The results of this study indicate that perceived usefulness is positively related to attitude toward using chatbot-based services, in line with previous research by Elmorshidy et al. [90] and Gümüş and Çark [91], which also found a direct relationship between these two variables. Furthermore, the findings support the notion that perceived usefulness has a positive impact on reuse intention toward chatbot services, consistent with various studies, e.g., [4,13,45,90,91,92]. Additionally, the results confirm the positive relationship between perceived ease of use and both attitude toward using chatbots [62] and perceived usefulness [87]. The study also found that subjective norm has a positive impact on perceived usefulness [97] and reuse intention [11,19,31], consistent with previous research.
One intriguing finding of this study is that attitude toward using chatbot-based services did not have a significant impact on reuse intention, which contradicts previous research [1,62] and a basic assumption of consumer behavior embedded in theoretical frameworks such as TRA and TAM. This may suggest that among customers with prior experience of chatbot-based services, differences in attitude (positive or negative) are offset by other aspects of the experience, particularly the usefulness of the task and satisfaction with the service provided by the chatbot. This presents an interesting starting point for further research into the relationship between consumers’ attitudes and intentions.

6. Implications

This study makes several contributions to the literature on chatbot usage by examining the impact of satisfaction, trust, and social presence on reuse intentions. Prior studies have primarily focused on understanding chatbot acceptance using models such as the technology acceptance model (TAM), the theory of planned behavior (TPB), and the unified theory of acceptance and use of technology (UTAUT) [1,4,62,92], but have not considered the role of satisfaction in reuse intentions. This study addresses this gap by combining satisfaction with TAM and TRA, offering new perspectives on the phenomenon. The findings confirm the impact of satisfaction on reuse intentions, thereby enriching the literature on chatbot usage. Additionally, the study adapts several constructs to a chatbot-based service context, providing good validation, as seen in Section 4.1.
The findings of this study have important implications for companies that use or are considering using chatbots to provide services to their customers. Chatbots are becoming increasingly popular in various fields and applications, such as customer service, personal assistance, education, e-commerce, healthcare, and entertainment [121]. They can be implemented in various forms, such as text, voice, and virtual reality, and can be integrated into websites, mobile apps, messaging platforms, and other digital channels to enhance user experience. Companies should invest in creating efficient chatbot-based services with minimal flaws and in demonstrating the usefulness of the service to customers. This will help to increase satisfaction and perceived usefulness, which in turn will increase reuse intentions. Additionally, given the importance of subjective norms in explaining chatbot usage, companies should engage communities, and particularly social influencers, to increase chatbot advocacy. It is also worth noting that, as efforts to make the experience hedonic may not have a significant impact on the intention to use, organizations should primarily focus on improving the usefulness of the chatbot-based service for optimal results.
In conclusion, based on the findings of our research, it is recommended that companies prioritize three aspects when implementing chatbots: (i) understanding the usefulness of the chatbot from the customer’s perspective, in order to provide value and relevant benefits (e.g., time-saving) and make the chatbot a viable and usable option for customers; (ii) improving chatbot performance and efficacy to enhance user satisfaction; and (iii) motivating early adopters who could act as social influencers, since subjective norm plays a significant role in shaping attitudes and influencing reuse behavior toward chatbots.

7. Limitations and Suggestions for Future Research

One of the main limitations of this investigation pertains to the scope of the selected sample, which does not permit the generalization of findings to other populations. Therefore, it is recommended that future studies expand the sample to other countries in order to further validate the findings. It would also be beneficial for future studies to conduct investigations in real-world settings, specifically by administering satisfaction surveys for chatbot-based services. Examining different types of services, such as service appointments and information requests, or comparing different sectors, such as healthcare or car assistance services, can also provide valuable insights into this under-explored topic. Thus, while this study did not focus on any particular sector, future research should aim to explore and compare the intentions to use or reuse chatbots across different sectors. Such investigations would provide a more comprehensive understanding of the potential applications of chatbot technology in various settings, leading to more effective and targeted deployment of this technology. Moreover, since this study found that perceived social presence and trust do not have a direct impact on customers’ reuse intention, future studies should further examine the direct and indirect effects of trust and perceived social presence on chatbot usage intentions.
Finally, this study did not consider the impact of confirmation of expectations from prior use on satisfaction and reuse intention, which should be addressed by future research. As explained by Bhattacherjee [44], the expectation confirmation model (ECM) proposes that reuse intention is explained by satisfaction, and that both variables are affected by confirmation of expectations from prior use and by perceived usefulness. As the ECM is particularly relevant to explaining post-usage behavior [45], such as reuse intentions, it is recommended that future research further consider this model and integrate the construct “confirmation of expectation from prior use” to further explain reuse intentions.

Author Contributions

Conceptualization, F.A.S. and B.B.; methodology, F.A.S., A.S.S. and B.B.; data analysis, A.S.S.; writing—original draft preparation, F.A.S. and A.S.S.; writing—review and editing, B.B. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived for this study due to the nature of the data collected and the fact that the topic and the research objectives were not considered sensitive or pertained to any risks to participants. Ethical principles generally applied to social research were applied: informed consent, confidentiality, anonymity, and voluntariness.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study, as survey participants manifested their agreement to participate in the study and the inclusion of their responses in the analysis and publication of results.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.


Acknowledgments

The authors would like to express their gratitude to the Editors and Reviewers for their insightful comments and suggestions, which greatly contributed to the improvement of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References
  1. Kwangsawad, A.; Jattamart, A. Overcoming customer innovation resistance to the sustainable adoption of chatbot services: A community-enterprise perspective in Thailand. J. Innov. Knowl. 2022, 7, 100211. [Google Scholar] [CrossRef]
  2. Araújo, T.; Casais, B. Customer Acceptance of Shopping-Assistant Chatbots. In Marketing and Smart Technologies; Rocha, Á., Reis, J.L., Peter, M.K., Bogdanović, Z., Eds.; Springer: Singapore, 2020; Volume 167, pp. 278–287. [Google Scholar] [CrossRef]
  3. Calvaresi, D.; Ibrahim, A.; Calbimonte, J.-P.; Fragniere, E.; Schegg, R.; Schumacher, M.I. Leveraging inter-tourists interactions via chatbots to bridge academia, tourism industries and future societies. J. Tour. Futur. 2021, 1–27. [Google Scholar] [CrossRef]
  4. Pillai, R.; Sivathanu, B. Adoption of AI-based chatbots for hospitality and tourism. Int. J. Contemp. Hosp. Manag. 2020, 32, 3199–3226. [Google Scholar] [CrossRef]
  5. Ceccarini, C.; Prandi, C. Tourism for all: A mobile application to assist visually impaired users in enjoying tourist services. In Proceedings of the 16th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 11–14 January 2019; pp. 1–6. [Google Scholar] [CrossRef]
  6. Huang, D.-H.; Chueh, H.-E. Chatbot usage intention analysis: Veterinary consultation. J. Innov. Knowl. 2021, 6, 135–144. [Google Scholar] [CrossRef]
  7. Sheehan, B.; Jin, H.S.; Gottlieb, U. Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res. 2020, 115, 14–24. [Google Scholar] [CrossRef]
  8. Sands, S.; Ferraro, C.; Campbell, C.; Tsao, H.-Y. Managing the human–chatbot divide: How service scripts influence service experience. J. Serv. Manag. 2021, 32, 246–264. [Google Scholar] [CrossRef]
  9. Seo, K.; Lee, J. The Emergence of Service Robots at Restaurants: Integrating Trust, Perceived Risk, and Satisfaction. Sustainability 2021, 13, 4431. [Google Scholar] [CrossRef]
  10. Murtarelli, G.; Collina, C.; Romenti, S. “Hi! How can I help you today?”: Investigating the quality of chatbots–millennials relationship within the fashion industry. TQM J. 2022. ahead-of-print. [Google Scholar] [CrossRef]
  11. Chang, I.-C.; Shih, Y.-S.; Kuo, K.-M. Why would you use medical chatbots? interview and survey. Int. J. Med. Inform. 2022, 165, 104827. [Google Scholar] [CrossRef]
  12. Seitz, L.; Bekmeier-Feuerhahn, S.; Gohil, K. Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots. Int. J. Hum.-Comput. Stud. 2022, 165, 102848. [Google Scholar] [CrossRef]
  13. Nguyen, D.; Chiu, Y.-T.; Le, H. Determinants of Continuance Intention towards Banks’ Chatbot Services in Vietnam: A Necessity for Sustainable Development. Sustainability 2021, 13, 7625. [Google Scholar] [CrossRef]
  14. Huang, S.Y.; Lee, C.-J. Predicting continuance intention to fintech chatbot. Comput. Hum. Behav. 2022, 129, 107027. [Google Scholar] [CrossRef]
  15. Cardona, D.R.; Janssen, A.; Guhr, N.; Breitner, M.H.; Milde, J. A Matter of Trust? Examination of Chatbot Usage in Insurance Business. In Proceedings of the Annual Hawaii International Conference on System Sciences, Kauai, HA, USA, 5 January 2021; pp. 556–565. [Google Scholar] [CrossRef]
  16. Eren, B.A. Determinants of customer satisfaction in chatbot use: Evidence from a banking application in Turkey. Int. J. Bank Mark. 2021, 39, 294–311. [Google Scholar] [CrossRef]
  17. Sarbabidya, S.; Saha, T. Role of chatbot in customer service: A study from the perspectives of the banking industry of Bangladesh. Int. Rev. Bus. Res. Pap. 2020, 16, 231–248. [Google Scholar]
  18. Ng, M.; Coopamootoo, K.P.; Toreini, E.; Aitken, M.; Elliot, K.; van Moorsel, A. Simulating the effects of social presence on trust, privacy concerns & usage intentions in automated bots for finance. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS & PW), Genoa, Italy, 7–11 September 2020; pp. 190–199. [Google Scholar]
  19. Patil, D.; Kulkarni, D.S. Artificial Intelligence in Financial Services: Customer Chatbot Advisor Adoption. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 4296–4303. [Google Scholar] [CrossRef]
  20. Ferreira, M.; Barbosa, B. A Review on Chatbot Personality and Its Expected Effects on Users. In Trends, Applications, and Challenges of Chatbot Technology; Kuhail, M.A., Shawar, B.A., Hammad, R., Eds.; IGI Global: Hershey, PA, USA, 2023; pp. 222–243. [Google Scholar]
  21. Selamat, M.A.; Windasari, N.A. Chatbot for SMEs: Integrating customer and business owner perspectives. Technol. Soc. 2021, 66, 101685. [Google Scholar] [CrossRef]
  22. Clark, S. 5 Ways Chatbots Improve Employee Experience. Available online: (accessed on 12 January 2023).
  23. Fokina, M. The Future of Chatbots: 80+ Chatbot Statistics for 2023. Available online: (accessed on 21 January 2023).
  24. Williams, R. Study: Chatbots to Drive $112B in Retail Sales by 2023. Available online: (accessed on 12 January 2023).
  25. Følstad, A.; Nordheim, C.B.; Bjørkli, C.A. What Makes Users Trust a Chatbot for Customer Service? An Exploratory Interview Study. In Proceedings of the Internet Science; Bodrunova, S.S., Ed.; Springer: Cham, Switzerland, 2018; pp. 194–208. [Google Scholar] [CrossRef] [Green Version]
  26. van der Goot, M.J.; Hafkamp, L.; Dankfort, Z. Customer service chatbots: A qualitative interview study into the communication journey of customers. In Chatbot Research and Design; Følstad, A., Araujo, T., Papadopoulos, S., Law, E.L.C., Luger, E., Goodwin, M., Brandtzaeg, P.B., Eds.; Springer: Cham, Switzerland, 2021; pp. 190–204. [Google Scholar]
  27. Sladden, C.O. Chatbots’ Failure to Satisfy Customers Is Harming Businesses, Says Study. Available online: (accessed on 12 January 2023).
  28. Hsiao, K.-L.; Chen, C.-C. What drives continuance intention to use a food-ordering chatbot? An examination of trust and satisfaction. Libr. Hi Tech. 2022, 40, 929–946. [Google Scholar] [CrossRef]
  29. Huang, M.-H.; Rust, R.T. Artificial Intelligence in Service. J. Serv. Res. 2018, 21, 155–172. [Google Scholar] [CrossRef]
  30. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave new world: Service robots in the frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef] [Green Version]
  31. Aslam, W.; Siddiqui, D.A.; Arif, I.; Farhat, K. Chatbots in the frontline: Drivers of acceptance. Kybernetes, 2022; ahead-of-print. [Google Scholar] [CrossRef]
  32. Rahane, W.; Patil, S.; Dhondkar, K.; Mate, T. Artificial intelligence based solarbot. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 20–21 April 2018; pp. 601–605. [Google Scholar]
  33. Baier, D.; Rese, A.; Röglinger, M.; Baier, D.; Rese, A.; Röglinger, M. Conversational User Interfaces for Online Shops? A Categorization of Use Cases. In Proceedings of the 39th International Conference on Information Systems (ICIS), San Francisco, CA, USA, 13–16 December 2018. [Google Scholar]
  34. Toader, D.-C.; Boca, G.; Toader, R.; Măcelaru, M.; Toader, C.; Ighian, D.; Rădulescu, A.T. The Effect of Social Presence and Chatbot Errors on Trust. Sustainability 2019, 12, 256. [Google Scholar] [CrossRef] [Green Version]
  35. Lai, P.C. The literature review of technology adoption models and theories for the novelty technology. J. Inf. Syst. Technol. Manag. 2017, 14, 21–38. [Google Scholar] [CrossRef] [Green Version]
  36. Beldad, A.D.; Hegner, S.M. Expanding the Technology Acceptance Model with the Inclusion of Trust, Social Influence, and Health Valuation to Determine the Predictors of German Users’ Willingness to Continue Using a Fitness App: A Structural Equation Modeling Approach. Int. J. Hum.–Comput. Interact. 2018, 34, 882–893. [Google Scholar] [CrossRef] [Green Version]
  37. Lee, M.-C. Factors influencing the adoption of internet banking: An integration of TAM and TPB with perceived risk and perceived benefit. Electron. Commer. Res. Appl. 2009, 8, 130–141. [Google Scholar] [CrossRef]
  38. Fishbein, M.; Ajzen, I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research; Addison-Wesley: Reading, MA, USA, 1975. [Google Scholar]
  39. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef] [Green Version]
  40. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef] [Green Version]
  41. Venkatesh, V.; Davis, F.D. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef] [Green Version]
  42. Pikkarainen, T.; Pikkarainen, K.; Karjaluoto, H.; Pahnila, S. Consumer acceptance of online banking: An extension of the technology acceptance model. Internet Res. 2004, 14, 224–235. [Google Scholar] [CrossRef] [Green Version]
  43. Tsui, H.-D. Trust, Perceived Useful, Attitude and Continuance Intention to Use E-Government Service: An Empirical Study in Taiwan. IEICE Trans. Inf. Syst. 2019, 102, 2524–2534. [Google Scholar] [CrossRef] [Green Version]
  44. Bhattacherjee, A. Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  45. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  46. Jiang, Y.; Yang, X.; Zheng, T. Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Comput. Hum. Behav. 2023, 138, 107485. [Google Scholar] [CrossRef]
  47. Amoako-Gyampah, K.; Salam, A. An extension of the technology acceptance model in an ERP implementation environment. Inf. Manag. 2004, 41, 731–745. [Google Scholar] [CrossRef]
  48. Shahzad, F.; Xiu, G.; Khan, I.; Wang, J. m-Government Security Response System: Predicting Citizens’ Adoption Behavior. Int. J. Hum.–Comput. Interact. 2019, 35, 899–915. [Google Scholar] [CrossRef]
  49. Dhagarra, D.; Goswami, M.; Kumar, G. Impact of Trust and Privacy Concerns on Technology Acceptance in Healthcare: An Indian Perspective. Int. J. Med. Inform. 2020, 141, 104164. [Google Scholar] [CrossRef] [PubMed]
  50. Aljazzaf, Z.M.; Perry, M.; Capretz, M.A.M. Online trust: Definition and principles. In Proceedings of the ICCGI’10: Proceedings of the 2010 Fifth International Multi-Conference on Computing in the Global Information Technology, Valencia, Spain, 20–25 September 2010; pp. 163–168. [Google Scholar]
  51. Corritore, C.L.; Kracher, B.; Wiedenbeck, S. On-line trust: Concepts, evolving themes, a model. Int. J. Hum.-Comput. Stud. 2003, 58, 737–758. [Google Scholar] [CrossRef]
  52. Bhattacherjee, A. Individual Trust in Online Firms: Scale Development and Initial Test. J. Manag. Inf. Syst. 2002, 19, 211–241. [Google Scholar] [CrossRef]
  53. Gefen, D.; Straub, D. Managing User Trust in B2C e-Services. e-Serv. J. 2003, 2, 7. [Google Scholar] [CrossRef]
  54. Holsapple, C.W.; Sasidharan, S. The dynamics of trust in B2C e-commerce: A research model and agenda. Inf. Syst. e-Bus. Manag. 2005, 3, 377–403. [Google Scholar] [CrossRef]
  55. Zarmpou, T.; Saprikis, V.; Markos, A.; Vlachopoulou, M. Modeling users’ acceptance of mobile services. Electron. Commer. Res. 2012, 12, 225–248. [Google Scholar] [CrossRef]
  56. Agarwal, S. Trust or No Trust in Chatbots: A Dilemma of Millennial. In Cognitive Computing for Human-Robot Interaction; Elsevier: Cambridge, MA, USA, 2021; pp. 103–119. [Google Scholar] [CrossRef]
  57. Chung, M.; Joung, H.; Ko, E. The role of luxury brands conversational agents comparison between facetoface and chatbot. Glob. Fash. Manag. Conf. 2017, 2017, 540. [Google Scholar] [CrossRef]
  58. Przegalinska, A.; Ciechanowski, L.; Stroz, A.; Gloor, P.; Mazurek, G. In bot we trust: A new methodology of chatbot performance measures. Bus. Horiz. 2019, 62, 785–797. [Google Scholar] [CrossRef]
  59. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2014, 57, 407–434. [Google Scholar] [CrossRef]
  60. Kumar, V.; Rajan, B.; Venkatesan, R.; Lecinski, J. Understanding the Role of Artificial Intelligence in Personalized Engagement Marketing. Calif. Manag. Rev. 2019, 61, 135–155. [Google Scholar] [CrossRef]
  61. Belanche, D.; Casaló, L.V.; Flavián, C. Integrating trust and personal values into the Technology Acceptance Model: The case of e-government services adoption. Cuad. Econ. Dir. Empresa 2012, 15, 192–204. [Google Scholar] [CrossRef] [Green Version]
  62. Kasilingam, D.L. Understanding the attitude and intention to use smartphone chatbots for shopping. Technol. Soc. 2020, 62, 101280. [Google Scholar] [CrossRef]
  63. Rajaobelina, L.; Tep, S.P.; Arcand, M.; Ricard, L. Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychol. Mark. 2021, 38, 2339–2356. [Google Scholar] [CrossRef]
  64. Biocca, F.; Harms, C.; Burgoon, J.K. Toward a More Robust Theory and Measure of Social Presence: Review and Suggested Criteria. Presence Teleoperators Virtual Environ. 2003, 12, 456–480. [Google Scholar] [CrossRef]
  65. Oh, C.S.; Bailenson, J.N.; Welch, G.F. A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Front. Robot. AI 2018, 5, 114. [Google Scholar] [CrossRef] [Green Version]
  66. Hassanein, K.; Head, M. Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. Int. J. Hum.-Comput. Stud. 2007, 65, 689–708. [Google Scholar] [CrossRef]
  67. Go, E.; Sundar, S.S. Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Comput. Hum. Behav. 2019, 97, 304–316. [Google Scholar] [CrossRef]
  68. Kilteni, K.; Groten, R.; Slater, M. The Sense of Embodiment in Virtual Reality. Presence Teleoperators Virtual Environ. 2012, 21, 373–387. [Google Scholar] [CrossRef] [Green Version]
  69. Moon, J.H.; Kim, E.; Choi, S.M.; Sung, Y. Keep the Social in Social Media: The Role of Social Interaction in Avatar-Based Virtual Shopping. J. Interact. Advert. 2013, 13, 14–26. [Google Scholar] [CrossRef]
  70. de Visser, E.J.; Monfort, S.S.; McKendrick, R.; Smith, M.A.B.; McKnight, P.E.; Krueger, F.; Parasuraman, R. Almost human: Anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 2016, 22, 331–349. [Google Scholar] [CrossRef] [PubMed]
  71. Türkyılmaz, A.; Özkan, C. Development of a customer satisfaction index model. Ind. Manag. Data Syst. 2007, 107, 672–687. [Google Scholar] [CrossRef] [Green Version]
  72. Kim, B. Understanding Key Antecedents of Consumer Loyalty toward Sharing-Economy Platforms: The Case of Airbnb. Sustainability 2019, 11, 5195. [Google Scholar] [CrossRef] [Green Version]
  73. Oliver, R.L. A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions. J. Mark. Res. 1980, 17, 460–469. [Google Scholar] [CrossRef]
  74. Fornell, C.; Johnson, M.D.; Anderson, E.W.; Cha, J.; Bryant, B.E. The American Customer Satisfaction Index: Nature, Purpose, and Findings. J. Mark. 1996, 60, 7. [Google Scholar] [CrossRef] [Green Version]
  75. Santini, F.D.O.; Ladeira, W.J.; Sampaio, C.H. The role of satisfaction in fashion marketing: A meta-analysis. J. Glob. Fash. Mark. 2018, 9, 305–321. [Google Scholar] [CrossRef]
  76. Suchanek, P.; Králová, M. Customer satisfaction, loyalty, knowledge and competitiveness in the food industry. Econ. Res.-Ekon. Istraživanja 2019, 32, 1237–1255. [Google Scholar] [CrossRef]
  77. Chung, M.; Ko, E.; Joung, H.; Kim, S.J. Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 2020, 117, 587–595. [Google Scholar] [CrossRef]
  78. Ben Mimoun, M.S.; Poncin, I.; Garnier, M. Animated conversational agents and e-consumer productivity: The roles of agents and individual characteristics. Inf. Manag. 2017, 54, 545–559. [Google Scholar] [CrossRef]
  79. Silva, G.R.S.; Canedo, E.D. Towards User-Centric Guidelines for Chatbot Conversational Design. Int. J. Hum.–Comput. Interact. 2022, 1–23. [Google Scholar] [CrossRef]
  80. Jiang, H.; Cheng, Y.; Yang, J.; Gao, S. AI-powered chatbot communication with customers: Dialogic interactions, satisfaction, engagement, and customer behavior. Comput. Hum. Behav. 2022, 134, 107329. [Google Scholar] [CrossRef]
  81. Nowak, K.L.; Rauh, C. Choose your “buddy icon” carefully: The influence of avatar androgyny, anthropomorphism and credibility in online interactions. Comput. Hum. Behav. 2008, 24, 1473–1493. [Google Scholar] [CrossRef]
  82. Veeramootoo, N.; Nunkoo, R.; Dwivedi, Y.K. What determines success of an e-government service? Validation of an integrative model of e-filing continuance usage. Gov. Inf. Q. 2018, 35, 161–174. [Google Scholar] [CrossRef] [Green Version]
  83. Jang, Y.-T.J.; Liu, A.Y.; Ke, W.-Y. Exploring smart retailing: Anthropomorphism in voice shopping of smart speaker. Inf. Technol. People 2022. ahead-of-print. [Google Scholar] [CrossRef]
  84. Cheng, Y.; Jiang, H. How Do AI-driven Chatbots Impact User Experience? Examining Gratifications, Perceived Privacy Risk, Satisfaction, Loyalty, and Continued Use. J. Broadcast. Electron. Media 2020, 64, 592–614. [Google Scholar] [CrossRef]
  85. Hsu, C.-L.; Lin, J.C.-C. Understanding the user satisfaction and loyalty of customer service chatbots. J. Retail. Consum. Serv. 2023, 71, 103211. [Google Scholar] [CrossRef]
  86. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef] [Green Version]
  87. Park, S.Y. An analysis of the technology acceptance model in understanding university students’ behavioral intention to use e-learning. J. Educ. Technol. Soc. 2009, 12, 150–162. [Google Scholar]
  88. Aggelidis, V.P.; Chatzoglou, P.D. Using a modified technology acceptance model in hospitals. Int. J. Med. Inform. 2009, 78, 115–126. [Google Scholar] [CrossRef]
  89. Lee, D.Y.; Lehto, M.R. User acceptance of YouTube for procedural learning: An extension of the Technology Acceptance Model. Comput. Educ. 2013, 61, 193–208. [Google Scholar] [CrossRef]
  90. Elmorshidy, A.; Mostafa, M.M.; El-Moughrabi, I.; Al-Mezen, H. Factors Influencing Live Customer Support Chat Services: An Empirical Investigation in Kuwait. J. Theor. Appl. Electron. Commer. Res. 2015, 10, 63–76. [Google Scholar] [CrossRef] [Green Version]
  91. Gümüş, N.; Çark, Ö. The Effect of Customers’ Attitudes Towards Chatbots on their Experience and Behavioural Intention in Turkey. Interdiscip. Descr. Complex Syst. 2021, 19, 420–436. [Google Scholar] [CrossRef]
  92. Rese, A.; Ganster, L.; Baier, D. Chatbots in retailers’ customer communication: How to measure their acceptance? J. Retail. Consum. Serv. 2020, 56, 102176. [Google Scholar] [CrossRef]
  93. Muchran, M.; Ahmar, A.S. Application of TAM model to the use of information technology. Int. J. Eng. Technol. 2018, 7, 37–40. [Google Scholar]
  94. Rahmayanti, P.L.D.; Widagda, I.G.N.J.A.; Yasa, N.N.K.; Giantari, I.G.A.K.; Martaleni, M.; Sakti, D.P.B.; Suwitho, S.; Anggreni, P. Integration of technology acceptance model and theory of reasoned action in predicting e-wallet continuous usage intentions. Int. J. Data Netw. Sci. 2021, 5, 649–658.
  95. Buabeng-Andoh, C. Predicting students’ intention to adopt mobile learning. J. Res. Innov. Teach. Learn. 2018, 11, 178–191.
  96. Chismar, W.G.; Wiley-Patton, S. Does the extended technology acceptance model apply to physicians? In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003; p. 8.
  97. Huang, Y.-S.; Kao, W.-K. Chatbot service usage during a pandemic: Fear and social distancing. Serv. Ind. J. 2021, 41, 964–984.
  98. Arif, I.; Aslam, W.; Ali, M. Students’ dependence on smartphones and its effect on purchasing behavior. South Asian J. Glob. Bus. Res. 2016, 5, 285–302.
  99. Rahi, S.; Ghani, M.A.; Alnaser, F.M.; Ngah, A.H. Investigating the role of unified theory of acceptance and use of technology (UTAUT) in internet banking adoption context. Manag. Sci. Lett. 2018, 8, 173–186.
  100. Alalwan, A.A.; Dwivedi, Y.K.; Rana, N.P. Factors influencing adoption of mobile banking by Jordanian bank customers: Extending UTAUT2 with trust. Int. J. Inf. Manag. 2017, 37, 99–110.
  101. Palau-Saumell, R.; Forgas-Coll, S.; Sánchez-García, J.; Robres, E. User Acceptance of Mobile Apps for Restaurants: An Expanded and Extended UTAUT-2. Sustainability 2019, 11, 1210.
  102. Tak, P.; Panwar, S. Using UTAUT 2 model to predict mobile app based shopping: Evidences from India. J. Indian Bus. Res. 2017, 9, 248–264.
  103. Chang, C.-M.; Liu, L.-W.; Huang, H.-C.; Hsieh, H.-H. Factors Influencing Online Hotel Booking: Extending UTAUT2 with Age, Gender, and Experience as Moderators. Information 2019, 10, 281.
  104. Ajzen, I. The theory of planned behavior. In Handbook of Theories of Social Psychology; Van Lange, P.A.M., Kruglanski, A.W., Higgins, E.T., Eds.; SAGE: London, UK, 2012; pp. 438–459.
  105. Zahid, H.; Din, B.H. Determinants of Intention to Adopt E-Government Services in Pakistan: An Imperative for Sustainable Development. Resources 2019, 8, 128.
  106. Yap, C.S.; Ahmad, R.; Newaz, F.T.; Mason, C. Continuous Use Intention of E-Government Portals: The Perspective of Older Citizens. Int. J. Electron. Gov. Res. 2019, 15, 1–16.
  107. Agag, G.; El-Masry, A.A. Why Do Consumers Trust Online Travel Websites? Drivers and Outcomes of Consumer Trust toward Online Travel Websites. J. Travel Res. 2017, 56, 347–369.
  108. Ahmad, S.; Bhatti, S.; Hwang, Y. E-service quality and actual use of e-banking: Explanation through the Technology Acceptance Model. Inf. Dev. 2020, 36, 503–519.
  109. Spears, N.; Singh, S.N. Measuring Attitude toward the Brand and Purchase Intentions. J. Curr. Issues Res. Advert. 2004, 26, 53–66.
  110. Glanz, K.; Rimer, B.K.; Viswanath, K. Health Behavior and Health Education: Theory, Research, and Practice; John Wiley & Sons: San Francisco, CA, USA, 2008.
  111. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a Silver Bullet. J. Mark. Theory Pract. 2011, 19, 139–152.
  112. Chin, W.W. The partial least squares approach to structural equation modeling. In Modern Methods for Business Research; Marcoulides, G.A., Ed.; Lawrence Erlbaum Associates Publishers: New York, NY, USA, 1998; Volume 295, pp. 295–336.
  113. Sekaran, U.; Bougie, R. Research Methods for Business: A Skill Building Approach; John Wiley & Sons: Chichester, UK, 2016.
  114. Bagozzi, R.P.; Yi, Y. On the evaluation of structural equation models. J. Acad. Mark. Sci. 1988, 16, 74–94.
  115. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50.
  116. Streukens, S.; Leroi-Werelds, S. Bootstrapping and PLS-SEM: A step-by-step guide to get more out of your bootstrap results. Eur. Manag. J. 2016, 34, 618–632.
  117. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: New York, NY, USA, 2013.
  118. Stone, M. Cross-Validatory Choice and Assessment of Statistical Predictions. J. R. Stat. Soc. Ser. B (Methodol.) 1974, 36, 111–133.
  119. Geisser, S. A predictive approach to the random effect model. Biometrika 1974, 61, 101–107.
  120. Pal, D.; Roy, P.; Arpnikanondt, C.; Thapliyal, H. The effect of trust and its antecedents towards determining users’ behavioral intention with voice-based consumer electronic devices. Heliyon 2022, 8, e09271.
  121. Caldarini, G.; Jaf, S.; McGarry, K. A Literature Survey of Recent Advances in Chatbots. Information 2022, 13, 41.
Figure 1. The proposed research model.
Figure 2. The structural model.
Table 1. Measurement items and descriptive statistics.
Attitude toward using chatbots [109]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
ATT1: Tick the option that best describes your opinion about the use of chatbots: good/bad (M = 3.39, SD = 1.03)
ATT2: Tick the option that best describes your opinion about the use of chatbots: favorable/unfavorable (M = 3.33, SD = 1.12)
ATT3: Tick the option that best describes your opinion about the use of chatbots: high-quality/low-quality (M = 3.00, SD = 0.98)
ATT4: Tick the option that best describes your opinion about the use of chatbots: positive/negative (M = 3.39, SD = 1.02)
ATT5: Tick the option that best describes your opinion about the use of chatbots: lacks important benefits/offers important benefits (M = 3.38, SD = 1.03)
Satisfaction [77]
5-point Likert Scale (1—Strongly disagree to 5—Strongly agree)
SAT1: I am satisfied with chatbots (M = 3.21, SD = 0.94)
SAT2: I am content with chatbots (M = 3.17, SD = 0.95)
SAT3: The chatbots did a good job (M = 3.28, SD = 0.93)
SAT4: The chatbots did what I expected (M = 3.30, SD = 0.97)
SAT5: I am happy with the chatbots (M = 3.05, SD = 0.92)
SAT6: I was satisfied with the experience of interacting with chatbots (M = 3.27, SD = 0.95)
Perceived Usefulness [41,55,95]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
PUS1: Using chatbots improves my performance (M = 2.98, SD = 1.01)
PUS2: Using chatbots increases my productivity (M = 3.01, SD = 1.04)
PUS3: Using chatbots enhances my effectiveness to perform tasks (M = 3.07, SD = 1.06)
PUS4: I find chatbots useful in my daily life (M = 2.96, SD = 1.13)
PUS5: Using chatbots enables me to accomplish tasks more quickly (M = 3.22, SD = 1.18)
PUS6: Using chatbots would increase my efficiency (M = 3.03, SD = 1.13)
Perceived ease of use [41,95]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
PEU1: My interaction with chatbots is clear and understandable (M = 3.45, SD = 1.03)
PEU2: Interacting with chatbots does not require a lot of mental effort (M = 3.57, SD = 1.05)
PEU3: I find chatbots to be easy to use (M = 3.78, SD = 0.95)
PEU4: I find it easy to get the chatbots to do what I want them to do (M = 3.12, SD = 0.99)
PEU5: It is easy for me to become skillful at using chatbots (M = 3.62, SD = 0.96)
PEU6: I have the knowledge necessary to use chatbots (M = 3.72, SD = 1.01)
Subjective norm [95,110]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
SNO1: People who influence my behavior think that I should use chatbots (M = 2.60, SD = 1.00)
SNO2: People who are important to me will support me to use chatbots (M = 2.97, SD = 1.01)
SNO3: People whose views I respect support the use of chatbots (M = 3.03, SD = 0.96)
SNO4: It is expected of me to use chatbots (M = 3.12, SD = 1.07)
SNO5: I feel under social pressure to use chatbots (M = 2.16, SD = 1.17)
Trust [18]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
TRU1: I feel that the chatbots are trustworthy (M = 2.99, SD = 0.87)
TRU2: I do not think that chatbots will act in a way that is disadvantageous to me (M = 3.10, SD = 0.83)
TRU3: I trust in chatbots (M = 3.07, SD = 0.93)
Perceived Social Presence [34]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
PSP1: I feel a sense of human contact when interacting with chatbots (M = 2.53, SD = 1.17)
PSP2: Even though I could not see chatbots in real life, there was a sense of human warmth (M = 2.27, SD = 1.11)
PSP3: When interacting with chatbots, there was a sense of sociability (M = 2.42, SD = 1.13)
PSP4: I felt there was a person who was a real source of comfort to me (M = 2.26, SD = 1.14)
PSP5: I feel there was a person who is around when I am in need (M = 2.35, SD = 1.19)
Reuse intention [18]
7-point Likert Scale (1—Strongly disagree to 7—Strongly agree)
INT1: If I have access to chatbots, I will use it (M = 3.38, SD = 1.00)
INT2: I think my interest in chatbots will increase in the future (M = 3.40, SD = 0.98)
INT3: I will use chatbots as much as possible (M = 2.94, SD = 1.00)
INT4: I plan to use chatbots in the future (M = 3.35, SD = 0.99)
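The mean and standard deviation reported for each item in Table 1 are plain descriptives over the 201 responses. A minimal sketch of the computation, using hypothetical 7-point responses rather than the study's raw data:

```python
from statistics import mean, stdev

# Hypothetical 7-point responses to one item (not the study's data).
att1 = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]

# stdev() is the sample standard deviation (n - 1 denominator),
# the usual choice for survey descriptives.
print(f"M = {mean(att1):.2f}, SD = {stdev(att1):.2f}")  # prints: M = 3.30, SD = 0.95
```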
Table 2. Results of validity and reliability of measurement model (n = 201).
Variables | Factor Loadings | α | CR | AVE | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
ATT | 0.800–0.886 | 0.92 | 0.92 | 0.74 | 0.86
CI | 0.805–0.875 | 0.91 | 0.91 | 0.71 | 0.61 | 0.84
PEU | 0.509–0.918 | 0.86 | 0.86 | 0.55 | 0.58 | 0.59 | 0.75
PSP | 0.811–0.984 | 0.96 | 0.96 | 0.82 | 0.33 | 0.45 | 0.24 | 0.91
PUS | 0.862–0.897 | 0.95 | 0.95 | 0.77 | 0.60 | 0.77 | 0.53 | 0.38 | 0.88
SAT | 0.752–0.958 | 0.96 | 0.96 | 0.80 | 0.70 | 0.81 | 0.65 | 0.47 | 0.70 | 0.89
SNO | 0.506–0.811 | 0.80 | 0.80 | 0.50 | 0.33 | 0.57 | 0.36 | 0.44 | 0.60 | 0.41 | 0.71
TRU | 0.659–0.886 | 0.84 | 0.84 | 0.65 | 0.52 | 0.56 | 0.42 | 0.37 | 0.48 | 0.56 | 0.37 | 0.80
Note: 1. Attitude toward using chatbots (ATT); 2. Reuse intention (CI); 3. Perceived ease of use (PEU); 4. Perceived social presence (PSP); 5. Perceived usefulness (PUS); 6. Satisfaction (SAT); 7. Subjective norm (SNO); 8. Trust (TRU). Cronbach’s Alpha (α); Composite Reliability (CR); Average variance extracted (AVE). The square root of the AVE appears on the diagonal.
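The reliability and validity figures in Table 2 follow standard conventions: composite reliability is CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), AVE is the mean squared standardized loading, and discriminant validity holds when √AVE exceeds the construct's correlations with all other constructs (the Fornell–Larcker criterion [115]). A sketch of these checks; the three loadings below are hypothetical, chosen only to fall within TRU's reported 0.659–0.886 range:

```python
import math

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

def ave(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a three-item construct (cf. TRU).
trust_loadings = [0.659, 0.812, 0.886]
cr = composite_reliability(trust_loadings)
v = ave(trust_loadings)

# Fornell–Larcker: sqrt(AVE) must exceed the construct's correlations with
# every other construct (TRU's largest reported correlation is 0.56).
print(f"CR = {cr:.2f}, AVE = {v:.2f}, sqrt(AVE) = {math.sqrt(v):.2f}")
print("Discriminant validity:", math.sqrt(v) > 0.56)
```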
Table 3. Results of R Square and predictive relevance (Q2).
Endogenous Constructs | R² | Q²
Attitude toward using | 0.496 | 0.339
Reuse intention | 0.760 | 0.513
Perceived Usefulness | 0.500 | 0.343
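Q² is the Stone–Geisser statistic [118,119], obtained in PLS-SEM through blindfolding: portions of the data are omitted, predicted back from the model, and Q² = 1 − SSE/SSO compares the squared prediction errors (SSE) against the squared deviations of the omitted observations (SSO). Any value above zero indicates predictive relevance. A sketch with hypothetical blindfolding sums:

```python
def q_squared(sse, sso):
    """Stone–Geisser Q^2 = 1 - SSE/SSO, where SSE is the sum of squared
    prediction errors and SSO the sum of squared observations from the
    blindfolding procedure."""
    return 1 - sse / sso

# Hypothetical sums for one endogenous construct; a Q^2 of 0.513 (the
# value reported for reuse intention) means the model cuts squared
# prediction error roughly in half relative to the naive benchmark.
print(q_squared(487.0, 1000.0))  # → 0.513
```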
Table 4. Path Estimates for Proposed Model.
Path | β | T Value | p Value | VIF | f²
Trust → Satisfaction
Trust → Perceived Usefulness
Trust → Attitude toward using
Trust → Reuse intention
Perceived Social Presence → Trust
Perceived Social Presence → Reuse intention
Satisfaction → Reuse intention
Perceived Usefulness → Attitude toward using chatbots
Perceived Usefulness → Reuse intention
Perceived Ease of Use → Perceived Usefulness
Perceived Ease of Use → Attitude toward using chatbots
Perceived Ease of Use → Reuse intention
Subjective norm → Perceived Usefulness
Subjective norm → Reuse intention
Attitude toward using chatbots → Reuse intention
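The f² column in Table 4 is Cohen's effect size for a single structural path, f² = (R²_included − R²_excluded) / (1 − R²_included), conventionally read as small above 0.02, medium above 0.15, and large above 0.35 [117]; VIF values below 5 are usually taken to rule out problematic collinearity among predictors. A sketch with hypothetical R² values (only the 0.760 figure comes from Table 3; the reduced R² is made up for illustration):

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2 for one predictor: the drop in R^2 when the predictor
    is removed, scaled by the unexplained variance of the full model."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical: dropping one predictor from the reuse-intention model
# (full R^2 = 0.760, per Table 3) lowers R^2 to an assumed 0.700.
f2 = f_squared(0.760, 0.700)

# Cohen's conventional thresholds: 0.02 small, 0.15 medium, 0.35 large.
size = "small" if f2 < 0.15 else "medium" if f2 < 0.35 else "large"
print(f"f2 = {f2:.2f} ({size})")  # prints: f2 = 0.25 (medium)
```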
Silva, F.A.; Shojaei, A.S.; Barbosa, B. Chatbot-Based Services: A Study on Customers’ Reuse Intention. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 457-474.
