Article

Do Trusting Belief and Social Presence Matter? Service Satisfaction in Using AI Chatbots: Necessary Condition Analysis and Importance-Performance Map Analysis

by Tai Ming Wut 1,*, Stephanie Wing Lee 1, Jing (Bill) Xu 2 and Man Lung Jonathan Kwok 1
1 College of Professional and Continuing Education, The Hong Kong Polytechnic University, Hong Kong, China
2 Centre for Gaming and Tourism Studies, Macao Polytechnic University, Macao, China
* Author to whom correspondence should be addressed.
Informatics 2025, 12(3), 91; https://doi.org/10.3390/informatics12030091
Submission received: 4 July 2025 / Revised: 17 August 2025 / Accepted: 5 September 2025 / Published: 9 September 2025

Abstract

Research indicates that perceived trust affects both behavioral intention to use chatbots and satisfaction with the service they provide in customer service contexts. However, it remains unclear whether propensity to trust affects service satisfaction in this context. This research therefore explores how customers’ propensity to trust influences their trusting beliefs and, subsequently, their satisfaction when using chatbots for customer service. Through purposive sampling, individuals in Hong Kong with prior experience using chatbots were selected to participate in a quantitative survey. The study employed Necessary Condition Analysis, Importance-Performance Map Analysis, and Partial Least Squares Structural Equation Modelling to examine the factors influencing users’ trusting beliefs toward chatbots in customer service settings. Findings revealed that trust in chatbot interactions is significantly influenced by propensity to trust technology, social presence, perceived usefulness, and perceived ease of use. These factors, along with trusting belief, also influence service satisfaction in this context; indeed, social presence, perceived ease of use, propensity to trust, perceived usefulness, and trusting belief were all found to be necessary conditions. Combining these results with Importance-Performance Map Analysis identified priority areas for managerial action. This research extends the Technology Acceptance Model by incorporating social presence, propensity to trust technology, and trusting belief in the context of AI chatbot use for customer service.

1. Introduction

Despite the increasingly widespread adoption of chatbots, customers still commonly exhibit hesitation toward their usage. Some customers doubt the accuracy of answers provided by chatbots, while others find their template responses unsatisfactory. At times, chatbot-generated answers may even be irrelevant or contain unnecessary information, forcing users to rephrase their questions repeatedly and spend more time than intended [1]. Consequently, users’ trust in chatbots’ work performance is compromised [2]. Indeed, research indicates that individuals tend to trust humans more than chatbots [3]. Dissatisfied prior users may thus revert to traditional customer service delivered by humans: many prefer to call a hotline operated by human customer service representatives for customized answers, which may sometimes be more efficient than their AI counterparts, since humans can normally provide more tailor-made and precise answers in one go. Therefore, gaining deeper insight into how trust influences users’ interactions with AI chatbots is essential.
Chatbots are considered a form of artificial intelligence—they are not human beings. In today’s customer service context, with built-in algorithms, technology has enabled chatbots to provide assistance, with personalized interactions and a sense of connection [4]. In some cases, technologically advanced chatbots can even be programmed to express empathy and reflection, thereby inducing resonance among customers [5]. In turn, this creates the feeling and impression that chatbots are always there to assist and provide support, hence forming dependency and fostering trust [2].
Several recent studies have highlighted perceived trust as a key determinant shaping users’ willingness to adopt chatbots [2,6,7]. In addition, findings from Alagarsamy and Mehrolia [7] indicate that within the banking industry, perceived trust plays a notable role in driving user satisfaction.
Despite the growing adoption of chatbots in customer service, limited research has examined whether propensity to trust affects satisfaction with the service chatbots provide. It is thus timely to explore this relationship further. Therefore, this study examines how customers’ propensity to trust influences their satisfaction with chatbot-delivered services. Additionally, the role of social presence is important: do AI chatbots need to create human-like relationships with customers? To address these issues, the following research questions were developed:
  • Does social presence affect perceived trusting belief when using chatbots in the customer service context?
  • How does perceived trusting belief affect service satisfaction when using chatbots?
  • What are the priority areas that must be dealt with by managers, given the limited resources available?
Necessary condition analysis (NCA) was used alongside partial least squares structural equation modeling (PLS-SEM) not only to understand the determinants of service satisfaction when using chatbots but also to identify which determinants are essential. These determinants include perceived usefulness, perceived ease of use, perceived social presence, propensity to trust technology, and trusting belief. Importance-performance map analysis (IPMA) then identifies priority areas for managers to address.

2. Literature Review

2.1. Social Presence Theory

Social Presence Theory has psychological roots and explores people’s sense of being with others in different communication contexts [8]. It originates from Short et al.’s [9] views regarding social psychology in telecommunications. Communication in society can take different forms, including face-to-face non-mediated communications, as well as mediated communications. It has been argued that the former generates more social presence than the latter in different circumstances [10]. However, in today’s commercial world, there are many discussions and applications of Social Presence Theory when people or customers are involved in mediated communications. The use of online technology such as AI chatbots has facilitated this line of thinking and discussion.
Perceived existence and salience are associated with perceived quality of the communication [11], quality of interactions between people (or customers in commerce or business) and the signal/message conveyers (or merchants in commerce or business), or between people themselves [12]. Companies may often encourage clients to share experiences to create a communication platform that is driven by interaction, which can, in turn, be facilitated through social networking sites. Gefen and Straub [13] emphasized that social presence comprises several dimensions, including human warmth, sociability, personalization, and a feeling of human contact experienced during interactions with other individuals, products, or the environment.
A comprehensive analysis of research on AI-based chatbots revealed that prior studies had not fully applied Social Presence Theory [2,14]. To address this limitation, social presence was integrated into the conceptual framework proposed by Prakash et al. [2]. However, they utilized Social Response Theory instead of Social Presence Theory to explain social presence phenomena in AI-based chatbot use. In contrast, our study suggests that it is more appropriate to apply Social Presence Theory to the interactive environment and context of AI-based chatbots.

2.2. Technology Acceptance Theory

The Technology Acceptance Theory and its corresponding model have been widely adopted to study information systems and the use of advanced technologies [15,16]. In our study, Technology Acceptance Theory provides an additional theoretical base alongside Social Presence Theory in discussing people’s perceptions of and responses to using AI-based service chatbots. Although it has been adopted in this context [17], there is room for extending the theoretical application when discussing whether people accept the use of chatbots for different purposes.
According to Kesharwani and Bisht [18], two central constructs within the theory of technology acceptance are perceived ease of use (EASE) and perceived usefulness (USEFUL). Perceived usefulness focuses on whether people or users find the focal technologies able to meet their needs, such as information, payment, efficiency, and other add-on benefits [15]. Users will assess whether the technologies are acceptable and helpful when performing various activities. Perceived ease of use assesses the process and outcome of technology use from another perspective. It is an important construct in the e-commerce world, stressing the user-friendly interface, interaction, and convenience of information searching and sharing [19]. The mediated communication channel using service chatbots should be considered highly user-friendly and controllable. USEFUL and EASE are the basic functional attributes of chatbot technologies from the user or demand perspective. In accordance with Zhang et al. [20], the former can include attributes such as perceptions regarding informativeness, interactivity, and personalization, while the latter can relate to perceptions regarding accessibility. Our study adopts both to represent customers’ perceptions of service chatbot functions and attributes.
There have been some systematic reviews of Technology Acceptance Theory and its model over the past decade in understanding various new technologies and the contributing factors that are involved [21,22,23]. For instance, Momani and Jamous [23] summarized three steps in developing the Technology Acceptance Model: adoption, validation, and extension. The first stage witnessed many applications of the theory and the model, while the second stage focused on accurate measurements in different technologies. The third stage aimed to extend the model by introducing new variables and relationships, including attitudes, behavior (e.g., technology use), behavioral intention, adaptability, flexibility, previous technology experience, perceived value, and perceived playfulness [21,24].
Trust, despite being a well-researched construct [25] that is highly relevant to customer satisfaction in service industries, has not been thoroughly examined in service robot research [2,26]. Trust is established based on the trustee’s performance in meeting the trustor’s expectations [27]. Propensity to trust is an individual difference characteristic reflecting one’s general tendency to trust others [27].
Furthermore, previous review studies have identified limitations, including research scope and methods employed in the literature search and integration. It has been pinpointed that generalizability represents a major shortcoming in many studies they reviewed [28]. Nonetheless, they have listed or implied some future directions for the application and extension of Technology Acceptance Theory. First, a new set of variables (such as trust) can be included to further test the model and the complex relationships in different settings (such as AI-powered service chatbots). These can help validate and extend the model in alignment with future technological development trends. Second, different stakeholders, such as customers and employees, can offer their inputs to the validation and extension of the Technology Acceptance Model. Third, to better understand distinct social and business phenomena related to interactions with emerging technologies such as artificial intelligence, scholars can draw from diverse theoretical perspectives originating in fields such as anthropology, psychology, and sociology. For instance, variables from the theory of planned behavior, such as perceived behavioral control and subjective norms, can be effectively incorporated into the Technology Acceptance Model [24]. Furthermore, social presence and influence suggest that technology users not only assess whether the technologies in question are acceptable but also examine their own experiences and perceptions when interacting with the technologies and interfaces. Similarly, the integration of theoretical applications can be considered a trend for understanding and improving the applications of various technologies. The subsequent section outlines the hypotheses and research framework, highlighting the value of our theoretical integration and the various interesting variables that facilitate theoretical extension.

2.3. Research Framework and Development of Hypotheses

This study integrates both Social Presence Theory [9] and the Technology Acceptance Model [16,20] in discussing the perceptions of customers toward AI chatbots. On one hand, through the application of Technology Acceptance Theory, customers make assessments of the usefulness and ease of use of AI chatbots. In other words, the acceptance level of AI chatbots will be evaluated. On the other hand, in line with Social Presence Theory, customers’ perception of being with others (even if not humans) is assessed by emphasizing social presence when interacting with chatbots. Given the integration of Social Presence Theory and Technology Acceptance Theory, the understanding of mediated communications and interactions with chatbots involves the appraisal of the technological media and the perceived outcome of social presence between humans and technology, or AI chatbots. This enriches the literature by providing a framework for analyzing AI communication from technical and social perspectives.
This study provides further support to the findings of Prakash et al. [2] and highlights the relationship between users’ trust in technology and their personality and consumption characteristics. Some users’ mistrust of technology may extend to aspects including interface, interaction, and explanation [29]. However, in order to form a long-term, sustainable relationship between customers and technology, customers’ trust in AI is an essential foundation [30]. Moreover, trust in technology may vary significantly among people with different experiences and personality traits. Some are open to new experiences, while others are reluctant to accept them [31]. In this study, it is worthwhile to examine customers’ propensity to trust before evaluating outcomes of chatbot use, including their trusting belief and satisfaction [31].
Across various contexts, perceived ease of use influences the overall evaluation of technology use [3,18]. This not only enhances user satisfaction but also builds trust in technology [2]. To better predict the outcomes of trust and satisfaction, we have thus integrated core elements from Social Presence Theory, the Technology Acceptance Model, and the trust model in technology [2] in our framework (Figure 1). In our study, we propose that the greater the ease of use in customers’ perceptions of AI chatbots, the greater the level of service satisfaction as well as trust in technology use. Therefore, the following hypotheses were proposed:
H1: 
Perceived ease of use is positively associated with service satisfaction.
H2: 
Perceived ease of use is positively associated with trusting belief.
According to our preceding discussions, customers’ propensity to trust should be included in the model. It is a necessary determinant of customer satisfaction and trust. Propensity to trust is considered an important prerequisite for shaping trust, highlighting its vital role in the trust-building model [32]. It is also a contributing factor to general service satisfaction. Previous evidence has shown that individuals with high trust propensity have higher consumer satisfaction [32]. This is probably applicable in assessing chatbot use in our study. Therefore, the following hypotheses are developed:
H3: 
Propensity to trust technology is associated with service satisfaction.
H4: 
Propensity to trust technology is associated with trusting belief.
Social presence is considered an influential factor in driving customer satisfaction and trust. Customers’ trust in technology increases when they perceive high-quality communication and interaction [2]. The relationship between social presence and trust has been established in multiple domains [11,33], including chatbot use [34]. More specifically, social presence has been shown to increase customer satisfaction. According to Ogara et al. [35], the perception of social presence during instant messaging interactions increases user satisfaction. Following similar reasoning, it is plausible to extend this relationship to interactions involving chatbots. Drawing upon this reasoning, the hypotheses presented in this study are formulated as follows:
H5: 
Social presence can result in service satisfaction.
H6: 
Social presence can result in trusting belief.
Another technology acceptance-driven determinant in our framework is perceived usefulness. It is appropriate to postulate a positive association between customers’ overall evaluation of chatbot use and perceived usefulness. Within the service and marketing literature, customer trust can generate satisfaction in the loyalty-building process [36]. In our study, if customers trust the chatbot, it is likely that they will be satisfied in the long run. Here, perceived usefulness can help lead to customer satisfaction. For example, Han and Sa [37] applied the Technology Acceptance Model and found that students’ perceived usefulness of the technology directly influenced their satisfaction with online classes. Additionally, it has been shown that a relationship between customer trust and perceived usefulness exists [38]. In addition, customers’ trust in the medium (e.g., technology use) can be attributed to their perceived usefulness [39]. Given the previous discussions, the relevant hypotheses were developed:
H7: 
Trusting belief will result in service satisfaction.
H8: 
Perceived usefulness will result in service satisfaction.
H9: 
Perceived usefulness will result in trusting belief.

3. Materials and Methods

3.1. Methodology

In 2024, individuals in Hong Kong with experience in chatbot use were selected via purposive sampling to participate in a quantitative survey. Participants provided informed consent prior to taking part in the study. To screen out ineligible participants, a qualifying question about chatbot usage in customer service contexts over the past three months was included. Quota sampling was employed to enhance sample representativeness, with age and sex as criteria: approximately 60% of respondents were female, and roughly 60% were aged 18–40, 30% were aged 41–60, and 10% were over 60. A total of 200 people were approached, and 158 responses were collected, a response rate of 79%. Respondents were given a USD 6 supermarket coupon to compensate them for the time and effort of completing the questionnaire; the incentive encourages respondents to complete the questionnaire responsibly, and the amount is small relative to the local living standard, so it is unlikely to create a social desirability effect. Constructs were measured using validated scales: perceived ease of use, perceived usefulness, and social presence were adapted from Gefen and Straub [40], trusting belief and propensity to trust technology originated from Lankton et al. [41], and customer satisfaction was adapted from Ashfaq et al. [42]. All items were assessed on a seven-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree), and the established scales were adapted to the customer service context using AI chatbots. A Chinese translation of the questionnaire was presented alongside the English version, and the back-translation method was used to ensure the meanings remained consistent.

3.2. Data Collection

This study involved 158 participants. The minimum sample size was determined using the G*Power 3.1.9.7 analysis tool, which indicated that at least 89 participants were required based on five predictors, an effect size (f²) of 0.15 (a medium effect by Cohen’s convention), and a 95 percent confidence level (alpha = 0.05). Table 1 summarizes the sample’s demographic characteristics. A total of 39.2% of the sample were men and 60.8% were women. Regarding age, 38.6% of respondents were aged 18–30, 19.6% were aged 31–40, 26.6% were aged 41–50, 5.7% were aged 51–60, and 9.5% were aged 61 or above. A total of 43% of the respondents worked in large companies (>100 employees); 33.5% were professionals, 9.5% worked in retailing and customer service, 5.7% worked in the tourism industry, 5.1% were civil servants, 3.8% worked in the cultural and creative industry, 3.2% worked in the financial services sector, and 1.9% worked in the trading and logistics industry; 18.3% were retired and 5.1% were on a career break. The demographic composition of variables such as age and gender was generally consistent with that reported by the Census and Statistics Department [43] of the Hong Kong government.
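An a-priori sample-size calculation of the kind G*Power performs for multiple regression can be sketched as a search over N using the noncentral F distribution. This is a minimal illustration, not the paper’s actual G*Power run: the target power below is an assumption (the text reports only the effect size, alpha, and predictor count), so the returned N need not match the reported 89.

```python
# Sketch of an a-priori sample-size search for multiple regression,
# mirroring G*Power's f-squared-based calculation. f2, alpha, and the
# number of predictors follow the paper; target_power is an assumption.
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.15, alpha=0.05, n_predictors=5, target_power=0.80):
    """Smallest N whose power reaches target_power for an F test of R^2."""
    n = n_predictors + 2                      # smallest N with positive df
    while True:
        df1, df2 = n_predictors, n - n_predictors - 1
        f_crit = f_dist.ppf(1 - alpha, df1, df2)
        # Noncentrality parameter lambda = f^2 * N (G*Power convention)
        power = 1 - ncf.cdf(f_crit, df1, df2, nc=f2 * n)
        if power >= target_power:
            return n, power
        n += 1
```

Raising `target_power` (e.g., to 0.95) increases the required N, which is why the assumed power level matters when reproducing such calculations.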

3.3. Analysis Method

Partial least squares structural equation modeling (PLS-SEM) was used because of the predictive nature of our model; it has been widely applied in satisfaction models and technology acceptance frameworks, mainly in work related to ‘consumer behavior’ and ‘trust’ [44]. IPMA has been used with structural equation modeling (SEM) to examine the effects of chatbots’ attributes on customer relationships [45]. PLS-SEM provides evidence on the sufficiency of the independent variables; NCA, which has not yet been employed in this context, provides evidence for necessity logic; and IPMA assesses the variables’ importance and performance [46]. This research therefore employed NCA and PLS-SEM together to achieve a holistic understanding, further incorporating IPMA into the decision-making process in what is termed combined importance-performance map analysis (CIPMA) [46].
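The necessity logic behind NCA can be illustrated with a toy implementation of the CE-FDH (ceiling envelopment, free disposal hull) effect size: the area of the “empty” zone above the step-shaped ceiling line, divided by the scope of the data. This is our own simplified sketch of the standard NCA convention, not output from the NCA software used in the paper.

```python
# Toy CE-FDH necessity effect size: empty ceiling zone / scope.
# Simplified illustration of NCA's ceiling-envelopment logic.
import numpy as np

def ce_fdh_effect_size(x, y):
    """Effect size d in [0, 1]; larger d = stronger necessity of x for y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x_min, x_max = x.min(), x.max()
    y_min, y_max = y.min(), y.max()
    scope = (x_max - x_min) * (y_max - y_min)   # observed data rectangle
    zone = 0.0
    xs = np.unique(x)
    for left, right in zip(xs[:-1], xs[1:]):
        ceiling = y[x <= left].max()            # step height on [left, right)
        zone += (right - left) * (y_max - ceiling)
    return zone / scope
```

For example, with cases (0, 0.2), (0.5, 0.6), (1, 1) the upper-left corner is empty, so the effect size is large; if high y values occurred at low x, the zone (and d) would shrink toward zero.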

4. Results

All indicator loadings exceeded the 0.708 threshold, indicating good indicator reliability, and Cronbach’s alpha values showed good internal consistency reliability. Good convergent validity was supported by Average Variance Extracted (AVE) values above 0.50. Composite reliability (rho a) was used because it reflects a construct’s internal consistency well (Table 2), meaning the constructs have good reliability in our measurement model [47].
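The reliability criteria above can be made concrete with the standard formulas computed from standardized indicator loadings. The loadings below are illustrative, not the paper’s values, and the composite reliability shown is the common rho_c formula (the paper reports rho_a, a related index).

```python
# AVE and composite reliability (rho_c) from standardized loadings.
# Example loadings are illustrative, not the paper's actual values.
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of squared loadings."""
    l = np.asarray(loadings, float)
    return float(np.mean(l ** 2))

def composite_reliability(loadings):
    """rho_c = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    l = np.asarray(loadings, float)
    s = l.sum()
    return float(s ** 2 / (s ** 2 + np.sum(1 - l ** 2)))

loadings = [0.75, 0.80, 0.85]        # all above the 0.708 loading threshold
```

The 0.708 loading threshold exists precisely because 0.708² ≈ 0.50: each indicator then contributes at least half of its variance to the construct, which is what pushes AVE above the 0.50 criterion.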
Table 3 shows that the constructs in our measurement model have good discriminant validity, with all heterotrait-monotrait (HTMT) ratios below the 0.85 threshold.
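The 0.85 discriminant-validity threshold referenced above is conventionally the HTMT ratio: the mean correlation between items of two different constructs, divided by the geometric mean of the average within-construct item correlations. A minimal sketch with an illustrative correlation matrix (not the paper’s data):

```python
# HTMT ratio for two constructs measured by item blocks.
# The toy correlation matrix is illustrative only.
import numpy as np

def htmt(item_corr, block_a, block_b):
    """Heterotrait-heteromethod mean / geometric mean of monotrait means."""
    r = np.abs(np.asarray(item_corr, float))
    hetero = r[np.ix_(block_a, block_b)].mean()
    def monotrait_mean(block):
        sub = r[np.ix_(block, block)]
        off_diag = sub[~np.eye(len(block), dtype=bool)]
        return off_diag.mean()
    return float(hetero / np.sqrt(monotrait_mean(block_a) * monotrait_mean(block_b)))

# Items 0-1 measure construct A, items 2-3 construct B.
R = [[1.0, 0.8, 0.4, 0.4],
     [0.8, 1.0, 0.4, 0.4],
     [0.4, 0.4, 1.0, 0.8],
     [0.4, 0.4, 0.8, 1.0]]
```

Here cross-construct correlations (0.4) are well below within-construct correlations (0.8), so HTMT = 0.5, comfortably under 0.85.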
The hypothesis testing results are shown in Table 4: eight of the nine hypotheses were supported. Hypothesis 3, which proposed an association between propensity to trust and satisfaction, was not supported, as its confidence interval contains zero. The effect sizes for all supported hypotheses were small (Table 5). Figure 2 shows the structural model. The R-squared values for trusting belief and service satisfaction are 54.1% and 64.6%, respectively, indicating that our model has moderate explanatory power. The f-squared effect sizes for the supported paths ranged from 0.035 to 0.141, i.e., small to medium effects on service satisfaction. The Q-squared values ranged from 0.523 to 0.561, indicating large predictive relevance for the dependent construct, service satisfaction. Finally, all inner-model collinearity statistics (VIF) are below 3 (Table 6).
The structural model is presented below (Figure 2).
Hypothesis testing indicated that all the independent variables were sufficient to affect service satisfaction except propensity to trust technology, which can be attributed to the mediating effect of trusting belief: propensity to trust technology affects service satisfaction via trusting belief. Willingness to rely on technology alone cannot establish service satisfaction; trust in the technology must be established first (Figure 3). The direct effect is 0.033, which is not significant. The path coefficients along the indirect route are 0.141 and 0.255 (Figure 2); their product yields an indirect effect of 0.036, and adding the direct effect gives a total effect of 0.069.
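The mediation arithmetic above can be spelled out directly using the coefficients reported in the paper:

```python
# Mediation arithmetic from the reported path coefficients:
# indirect effect = product of the two mediated paths,
# total effect    = direct effect + indirect effect.
direct = 0.033        # propensity to trust -> satisfaction (not significant)
a, b = 0.141, 0.255   # propensity -> trusting belief -> satisfaction

indirect = a * b      # 0.141 * 0.255 = 0.036 (rounded)
total = direct + indirect
```

This is the standard product-of-coefficients logic: because the direct path is non-significant while the product a × b is not, the effect of propensity to trust on satisfaction is carried by trusting belief.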
Since all the outer weights of the model are positive, we can continue our analysis by following step five from the guideline provided by Richter et al. [48]. Using the standard importance-performance map, performance data for the antecedent constructs can be obtained (Figure 4).
Importance is captured by the total effect, i.e., the combined value of the indirect and direct effects, while performance is derived from the latent variable scores built from the indicator data [49]. The importance of perceived usefulness is much higher than that of perceived ease of use, and the performance of social presence is lower than that of the other variables (Figure 5).
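The performance axis of an IPMA is conventionally a 0–100 rescaling of the latent variable scores from the original response scale. A minimal sketch, assuming the paper’s 7-point Likert scale:

```python
# IPMA performance: rescale a construct's mean score from its Likert
# range to 0-100. Assumes the paper's 1-7 response scale.
def ipma_performance(mean_score, low=1, high=7):
    return (mean_score - low) / (high - low) * 100
```

For instance, a construct averaging 5.2 on the 1–7 scale maps to a performance of 70, which is how constructs measured on different scales become comparable on one map.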
Upon completion of the IPMA, we proceeded using the latent variable scores. Following steps six to eight of the procedure, NCA was conducted. Checking the bottleneck table (Table 7) at a satisfaction outcome level of 85 yields the percentage of cases that do not meet each necessary condition. Finally, all the necessity effect sizes are larger than 0, and the p-values are smaller than 0.05 (Table 8).
The desired level of service satisfaction is set at 85 out of 100, based on the common understanding of what constitutes an acceptable service satisfaction result [46]. The combined IPMA was prepared using the desired level of service satisfaction (Figure 6). All the independent variables are white circles, indicating they are the necessary conditions for the desired level. Perceived usefulness of chatbots has the highest importance rating, followed by ease of use of chatbots. Propensity to trust technology has the least importance. In terms of performance, perceived ease of use has the highest performance rating, followed by propensity to trust technology. There are cases that do not meet the necessary conditions, especially for perceived ease of use and perceived usefulness (larger white circles), which belong to the prioritized action areas. In the lower left-hand corner of the combined IPMA map (Figure 6), social presence also appears as a prioritized action area due to unmet conditions, indicated by a large white circle and lower performance.
To achieve the desired level of service satisfaction of 85%, the corresponding levels are as follows: perceived ease of use at 54%, propensity to trust technology at 28%, social presence at 30%, trusting belief at 25%, and perceived usefulness at 50%, according to the bottleneck table. However, for a desired level of service satisfaction of 80% or lower, social presence is not necessary (Table 7).
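The bottleneck levels reported above amount to a conjunctive necessity check: a case can reach the 85% satisfaction target only if every condition meets its bottleneck level. A simple sketch using the percentages from the bottleneck table (the function and dictionary names are ours):

```python
# Bottleneck levels (in %) required for a desired satisfaction level of 85,
# taken from the paper's bottleneck table (Table 7).
BOTTLENECK_85 = {
    "perceived_ease_of_use": 54,
    "propensity_to_trust":   28,
    "social_presence":       30,
    "trusting_belief":       25,
    "perceived_usefulness":  50,
}

def meets_necessary_conditions(case, bottleneck=BOTTLENECK_85):
    """True only if every condition reaches its bottleneck level."""
    return all(case.get(k, 0) >= level for k, level in bottleneck.items())
```

This captures the key difference from regression logic: a shortfall on any single condition (say, social presence at 20%) cannot be compensated by strength on the others.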

5. Discussion

By incorporating propensity to trust technology, social presence, and trusting belief into the Technology Acceptance Model, our study enriches current literature in the context of AI chatbot usage for customer service.
Instead of behavioral intention, service satisfaction was selected as the outcome variable in the customer service context, and the modified Technology Acceptance Model was supported by our empirical data. Although higher trust propensity has been associated with higher consumer satisfaction [32], the direct association between propensity to trust technology and consumer satisfaction was not supported in our study; instead, the linkage runs through trusting belief. This is more logical to the researchers, because trusting belief must first be established when people face a new technological environment.
From the PLS-SEM structural model results, trusting belief, perceived usefulness, and perceived ease of use contribute to customer satisfaction with significant path coefficients larger than that of social presence. Yet in a competitive marketplace where near-perfect service satisfaction is the aim, social presence remains essential. Based on Hauff et al. [46], several managerial recommendations follow. In the upper right-hand corner of the combined IPMA map (Figure 6), the white circles for trusting belief, perceived ease of use, and perceived usefulness indicate prioritized action areas.
At high service satisfaction standards (e.g., 85%), perceived ease of use, propensity to trust, social presence, trusting belief, and perceived usefulness are all necessary. In contrast, at medium service satisfaction standards (e.g., 80% or below), social presence is not necessary. In other words, social presence enters the model only when high service satisfaction standards are pursued.

6. Conclusions

Within the customer service context, factors such as perceived usefulness, perceived ease of use, social presence, and an individual’s propensity to trust technology collectively influence users’ trusting beliefs toward chatbots. Additionally, customer satisfaction with chatbot-based services is influenced by perceived ease of use and usefulness, as well as by social presence and trusting beliefs. To achieve the 85% service satisfaction standard, common in the service industry, all independent variables are necessary. The combined IPMA map identifies prioritized action areas as social presence, perceived usefulness, and perceived ease of use.
Practical measures are needed to improve chatbots’ perceived usefulness, ease of use, and social presence. Humor could enhance chatbots’ social presence. More functions with friendly interfaces could improve perceived usefulness. Showing suggested alternative questions may increase perceived ease of use by reducing users’ need to type exact questions.
Future research could extend from AI chatbots to robots. As this is a cross-sectional study, longitudinal research would better capture service satisfaction and its antecedents. Comparing AI chatbots with human agents in customer service contexts presents another research opportunity.

Author Contributions

Conceptualization, T.M.W.; methodology, T.M.W.; software, T.M.W.; validation, T.M.W.; formal analysis, T.M.W.; investigation, T.M.W.; resources, J.X.; data curation, T.M.W.; writing—original draft preparation, T.M.W.; writing—review and editing, S.W.L.; visualization, S.W.L.; supervision, M.L.J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The ethical review for research was approved by the College of Professional and Continuing Education (RC/ETH/H).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data can be requested from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Research model.
Figure 2. Structural model.
Figure 3. Full mediation effect.
Figure 4. The importance-performance graph.
Figure 5. Importance-performance map.
Figure 6. Combined IPMA.
Table 1. Demographic data.

| Category | | Frequency | Percentage (%) |
|---|---|---|---|
| Gender | Male | 62 | 39.2 |
| | Female | 96 | 60.8 |
| Age | 18–30 | 61 | 38.6 |
| | 31–40 | 31 | 19.6 |
| | 41–50 | 42 | 26.6 |
| | 51–60 | 9 | 5.7 |
| | 61 or above | 15 | 9.5 |
| Organization size | Less than 5 | 6 | 3.8 |
| | 5–20 | 16 | 10.1 |
| | 21–50 | 16 | 10.1 |
| | 51–100 | 12 | 7.6 |
| | 101 or above | 68 | 43.0 |
| | N/A | 40 | 25.3 |
| Industry | Career break | 8 | 5.1 |
| | Civil servant | 8 | 5.1 |
| | Cultural and creative | 6 | 3.8 |
| | Financial services | 5 | 3.2 |
| | Professionals | 53 | 33.5 |
| | Retailing and customer service | 15 | 9.5 |
| | Retired | 29 | 18.3 |
| | Student | 22 | 13.9 |
| | Tourism | 9 | 5.7 |
| | Trading & logistics | 3 | 1.9 |
Table 2. Validity and reliability of measurement model.

| Scale and Item | Average Variance Extracted | Composite Reliability (rho_a) | Cronbach's Alpha |
|---|---|---|---|
| Perceived usefulness (USEFUL) | 0.872 | 0.931 | 0.927 |
| Perceived ease of use (EASE) | 0.802 | 0.905 | 0.878 |
| Social presence (SOCIAL) | 0.834 | 0.938 | 0.934 |
| Propensity to trust technology (PROP) | 0.808 | 0.887 | 0.881 |
| Trusting belief (TRUST) | 0.663 | 0.913 | 0.898 |
| Service satisfaction (SATISFACTION) | 0.910 | 0.967 | 0.967 |
Table 3. Discriminant validity of the model.

| Construct | Mean | Std | EASE | PROP | SATISFACTION | SOCIAL | TRUST |
|---|---|---|---|---|---|---|---|
| Perceived ease of use (EASE) | 4.395 | 1.326 | | | | | |
| Propensity to trust technology (PROP) | 4.377 | 1.245 | 0.426 | | | | |
| Service satisfaction (SATISFACTION) | 3.829 | 1.396 | 0.747 | 0.381 | | | |
| Social presence (SOCIAL) | 3.066 | 1.412 | 0.599 | 0.318 | 0.607 | | |
| Trusting belief (TRUST) | 4.057 | 1.162 | 0.700 | 0.496 | 0.705 | 0.576 | |
| Perceived usefulness (USEFUL) | 4.083 | 1.459 | 0.766 | 0.321 | 0.762 | 0.585 | 0.665 |
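The off-diagonal values in Table 3 appear to be heterotrait-monotrait (HTMT) ratios; if so, discriminant validity holds when every ratio stays below a threshold (commonly 0.85 conservative, 0.90 liberal). A minimal sketch of that check, with the pairwise values taken from Table 3 and the function name ours:

```python
# Hedged sketch: treat Table 3's off-diagonal entries as HTMT ratios and
# verify each stays below the conservative 0.85 threshold.
htmt = {
    ("PROP", "EASE"): 0.426,
    ("SATISFACTION", "EASE"): 0.747, ("SATISFACTION", "PROP"): 0.381,
    ("SOCIAL", "EASE"): 0.599, ("SOCIAL", "PROP"): 0.318,
    ("SOCIAL", "SATISFACTION"): 0.607,
    ("TRUST", "EASE"): 0.700, ("TRUST", "PROP"): 0.496,
    ("TRUST", "SATISFACTION"): 0.705, ("TRUST", "SOCIAL"): 0.576,
    ("USEFUL", "EASE"): 0.766, ("USEFUL", "PROP"): 0.321,
    ("USEFUL", "SATISFACTION"): 0.762, ("USEFUL", "SOCIAL"): 0.585,
    ("USEFUL", "TRUST"): 0.665,
}

def htmt_violations(ratios, threshold=0.85):
    """Return the construct pairs that breach the threshold (empty = all pass)."""
    return [pair for pair, value in ratios.items() if value >= threshold]

print(htmt_violations(htmt))  # [] -> every ratio is below 0.85
```

The largest ratio (USEFUL–EASE, 0.766) remains comfortably under 0.85, consistent with the paper's discriminant-validity claim.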
Table 4. Hypotheses testing results.

| Path | Hypothesis | Path Coefficient | t-Statistic | p-Value | LCI (2.5%) | UCI (97.5%) | Supported |
|---|---|---|---|---|---|---|---|
| EASE -> SATISFACTION | H1 | 0.249 | 2.879 | 0.004 ** | 0.078 | 0.420 | Yes |
| EASE -> TRUST | H2 | 0.264 | 3.791 | <0.001 *** | 0.126 | 0.405 | Yes |
| PROP -> SATISFACTION | H3 | 0.033 | 0.499 | 0.618 | −0.097 | 0.163 | No |
| PROP -> TRUST | H4 | 0.185 | 3.114 | 0.002 ** | 0.070 | 0.305 | Yes |
| SOCIAL -> SATISFACTION | H5 | 0.141 | 2.045 | 0.041 * | 0.008 | 0.280 | Yes |
| SOCIAL -> TRUST | H6 | 0.139 | 2.255 | 0.024 * | 0.017 | 0.264 | Yes |
| TRUST -> SATISFACTION | H7 | 0.255 | 2.925 | 0.003 ** | 0.081 | 0.424 | Yes |
| USEFUL -> SATISFACTION | H8 | 0.323 | 4.175 | <0.001 *** | 0.166 | 0.470 | Yes |
| USEFUL -> TRUST | H9 | 0.210 | 2.992 | 0.003 ** | 0.067 | 0.343 | Yes |

Note: *** p < 0.001; ** p < 0.01; * p < 0.05.
Table 5. Effect sizes.

| Path | Hypothesis | Path Coefficient | Effect Size (f²) | Effect |
|---|---|---|---|---|
| EASE -> SATISFACTION | H1 | 0.249 | 0.064 | Small |
| EASE -> TRUST | H2 | 0.264 | 0.088 | Small |
| PROP -> SATISFACTION | H3 | 0.033 | 0.002 | N/A |
| PROP -> TRUST | H4 | 0.185 | 0.072 | Small |
| SOCIAL -> SATISFACTION | H5 | 0.141 | 0.035 | Small |
| SOCIAL -> TRUST | H6 | 0.139 | 0.040 | Small |
| TRUST -> SATISFACTION | H7 | 0.255 | 0.059 | Small |
| USEFUL -> SATISFACTION | H8 | 0.323 | 0.141 | Small |
| USEFUL -> TRUST | H9 | 0.210 | 0.071 | Small |
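The f² values in Table 5 follow the conventional classification: at least 0.02 small, 0.15 medium, 0.35 large, and below 0.02 negligible. A short helper illustrating the f² formula and these cut-offs (the function names are ours; the thresholds are Cohen's, not values from the paper):

```python
def f2(r2_included, r2_excluded):
    """Cohen's f2 for a predictor: the change in R2 when it is dropped,
    scaled by the unexplained variance of the full model."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def f2_class(value):
    """Classify an f2 effect size using the 0.02 / 0.15 / 0.35 conventions."""
    if value >= 0.35:
        return "large"
    if value >= 0.15:
        return "medium"
    if value >= 0.02:
        return "small"
    return "negligible"

# Values from Table 5: USEFUL -> SATISFACTION just misses "medium",
# while PROP -> SATISFACTION (reported as N/A) is negligible.
print(f2_class(0.141))  # small
print(f2_class(0.002))  # negligible
```

For example, dropping a predictor that lowers R² from 0.50 to 0.45 gives f2(0.50, 0.45) = 0.10, a small effect.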
Table 6. Collinearity statistics (VIF), inner model.

| Path | VIF |
|---|---|
| EASE -> SATISFACTION | 2.462 |
| EASE -> TRUST | 2.264 |
| PROP -> SATISFACTION | 1.266 |
| PROP -> TRUST | 1.181 |
| SOCIAL -> SATISFACTION | 1.626 |
| SOCIAL -> TRUST | 1.563 |
| TRUST -> SATISFACTION | 2.178 |
| USEFUL -> SATISFACTION | 2.287 |
| USEFUL -> TRUST | 2.135 |
Table 7. Bottleneck table for service satisfaction (latent variable scores; NN = not necessary).

| SATISFACTION level (%) | SATISFACTION | EASE | PROP | SOCIAL | TRUST | USEFUL |
|---|---|---|---|---|---|---|
| 0% | 0.000 | NN | NN | NN | NN | NN |
| 5% | 5.000 | NN | 5.347 | NN | NN | NN |
| 10% | 10.000 | NN | 5.347 | NN | NN | NN |
| 15% | 15.000 | NN | 5.347 | NN | NN | NN |
| 20% | 20.000 | 4.868 | 10.695 | NN | 11.340 | 5.968 |
| 25% | 25.000 | 16.667 | 10.695 | NN | 11.340 | 5.968 |
| 30% | 30.000 | 16.667 | 10.695 | NN | 11.340 | 16.667 |
| 35% | 35.000 | 16.667 | 10.695 | NN | 11.340 | 16.667 |
| 40% | 40.000 | 26.984 | 10.695 | NN | 11.340 | 16.667 |
| 45% | 45.000 | 26.984 | 10.695 | NN | 11.340 | 16.667 |
| 50% | 50.000 | 26.984 | 10.695 | NN | 11.340 | 16.667 |
| 55% | 55.000 | 26.984 | 10.695 | NN | 11.340 | 16.667 |
| 60% | 60.000 | 30.951 | 18.253 | NN | 11.340 | 16.667 |
| 65% | 65.000 | 30.951 | 28.098 | NN | 24.558 | 16.667 |
| 70% | 70.000 | 50.000 | 28.098 | NN | 24.558 | 16.667 |
| 75% | 75.000 | 53.967 | 28.098 | NN | 24.558 | 16.667 |
| 80% | 80.000 | 53.967 | 28.098 | NN | 24.558 | 16.667 |
| 85% | 85.000 | 53.967 | 28.098 | 30.185 | 24.558 | 50.000 |
| 90% | 90.000 | 60.898 | 50.000 | 30.185 | 44.331 | 55.655 |
| 95% | 95.000 | 60.898 | 50.000 | 47.612 | 44.331 | 55.655 |
| 100% | 100.000 | 81.852 | 50.000 | 50.199 | 56.051 | 55.655 |
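Table 7 is read row-wise: to reach the satisfaction level in the first column, each antecedent must attain at least the listed score, with NN meaning the condition is not necessary at that outcome level. As a toy sketch only (not the CE-FDH/CR-FDH ceiling estimation that NCA software actually performs), a free-disposal-hull-style bottleneck can be read off a sample as the minimum condition score among cases that attain the target outcome; the data below are hypothetical:

```python
def bottleneck(cases, y_target):
    """Simplified FDH-style bottleneck: the minimum condition level observed
    among cases reaching the target outcome; None if no case reaches it."""
    feasible = [x for x, y in cases if y >= y_target]
    return min(feasible) if feasible else None

# Hypothetical (condition, outcome) scores on a 0-100 scale
sample = [(20, 30), (35, 55), (50, 80), (60, 70), (80, 95)]
print(bottleneck(sample, 80))  # 50 -> at least 50 on the condition is needed
print(bottleneck(sample, 20))  # 20 -> the lowest observed condition suffices
```

This mirrors how the NN entries arise: when even low-scoring cases attain the outcome level, no meaningful minimum condition level is required.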
Table 8. cIPMA results.

| Antecedent Construct | Importance | Performance | Cases Not Meeting the Necessary Condition (%) * | Necessity Effect Size d (p-Value) |
|---|---|---|---|---|
| Perceived usefulness | 0.376 | 51.391 | 39.873 | 0.197 (0.000) |
| Perceived ease of use | 0.317 | 56.586 | 33.544 | 0.302 (0.000) |
| Trusting belief | 0.250 | 50.948 | 8.228 | 0.174 (0.000) |
| Social presence | 0.176 | 34.434 | 47.468 | 0.068 (0.021) |
| Propensity to trust technology | 0.080 | 56.290 | 9.494 | 0.194 (0.001) |

* Based on a satisfaction outcome level of 85.
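In an IPMA, managerial priority goes to constructs that combine high importance (total effect on satisfaction) with comparatively low performance. A small sketch ranking the antecedents by one simple heuristic, importance weighted by the remaining room for improvement; the heuristic and function name are ours, and the values are as parsed from Table 8, not a method reported in the paper:

```python
# Importance (total effect) and performance values as parsed from Table 8
ipma = {
    "Perceived usefulness":           (0.376, 51.391),
    "Perceived ease of use":          (0.317, 56.586),
    "Trusting belief":                (0.250, 50.948),
    "Social presence":                (0.176, 34.434),
    "Propensity to trust technology": (0.080, 56.290),
}

def priority_ranking(data):
    """Rank constructs by importance x (100 - performance): higher score
    means more leverage per unit of improvement, so act on it first."""
    score = {name: imp * (100 - perf) for name, (imp, perf) in data.items()}
    return sorted(score, key=score.get, reverse=True)

print(priority_ranking(ipma)[0])  # Perceived usefulness tops the action list
```

Under this heuristic, perceived usefulness ranks first, which is consistent with its largest path coefficient toward satisfaction in Table 4.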