Article

Elaborate or Succinct? The Impact of AI Chatbots’ Language Style on Customers’ Satisfaction in Online Service

1
Department of Business Administration, School of Management, Minzu University of China, Beijing 100081, China
2
School of Journalism & Communication, Jinan University, Guangzhou 510632, China
*
Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2026, 21(2), 51; https://doi.org/10.3390/jtaer21020051
Submission received: 16 October 2025 / Revised: 12 December 2025 / Accepted: 23 January 2026 / Published: 2 February 2026

Abstract

The growing prevalence of AI-powered chatbots in digital service environments has raised user expectations from mere functional efficiency to emotionally satisfying interactions. Drawing on Language Expectancy Theory (LET), this study investigates the impact of AI chatbot language style (namely, elaborate vs. succinct language) on customer service satisfaction. Across three studies, we demonstrate that customers exhibit higher satisfaction when interacting with chatbots employing elaborate language as opposed to succinct language. Furthermore, this effect is mediated by warmth and moderated by customer relationship norm orientation. The influence of elaborate language is more pronounced among customers with communal relationship norms, whereas those with exchange relationship norms respond more favorably to succinct language. Theoretically, this study enriches the literature on language style in human–computer interaction by introducing elaborateness as a pivotal communicative dimension. Practically, our results offer strategic guidance that can help service providers and developers to strategically tailor chatbot language styles to distinct customer segments, consequently enhancing service quality, fostering emotional engagement, and cultivating long-term customer loyalty within automated service systems.

1. Introduction

Artificial intelligence-powered chatbots are autonomous conversational agents designed to interact with customers, respond to inquiries, and deliver service support in digital environments [1,2]. For consumers, these systems provide real-time and round-the-clock assistance, streamline transaction processes, and offer personalized recommendations, thereby collectively enhancing overall service efficiency. For service providers, AI chatbots offer substantial cost advantages, making them an increasingly attractive solution for managing customer interactions at scale. As a result, the adoption of AI chatbots has rapidly expanded across online service platforms, including e-commerce customer service and digital travel agencies. Today, consumers expect not only accurate and efficient problem resolution but also meaningful and emotionally engaging interactions during service encounters [3]. In traditional service contexts, such emotional engagement is often shaped by the service provider’s language style, which plays a pivotal role in influencing customers’ evaluations of the overall experience [4]. As digital technologies increasingly mediate service interactions, this principle has begun to extend to human–AI communication. Language serves as the most prevalent medium in human–machine interactions, playing a critical role in shaping user satisfaction with service experiences [5]. Communicators’ language style subtly influences how message senders are perceived, prompting them to carefully craft messages aligned with social expectations [6].
Building on the communication and linguistics literature, recent studies have begun examining how language style shapes user experience. Language style refers to the manner in which individuals speak or write, specifically how they convey content carrying social meaning [7]. In customer-facing communication, organizations adopting language styles aligned with recipients’ expectations enhance communicative effectiveness and foster more favorable responses [8]. Given that AI chatbots are increasingly serving as the first point of contact between customers and organizations, similar principles of language style are expected to apply in these digital interactions. Consequently, a growing body of research is now examining how AI chatbot language style influences customer responses [9,10,11]. The language style employed by AI chatbots has been shown to significantly influence customer responses [12,13]. Prior studies have explored the effects of various stylistic dimensions, including formal versus informal language [10], abstract versus concrete language [11], and literal versus metaphorical expressions [14]. Building on this foundation, the present research focuses on another fundamental dimension: linguistic succinctness. According to the degree of conciseness, language can be categorized into two forms: elaborate language and succinct language [15,16]. Succinct language involves the use of concise vocabulary and short sentences, minimizing redundancy and conveying core information in a direct and efficient manner. It is intended to reduce cognitive load while maintaining informational completeness. In contrast, elaborate language emphasizes comprehensiveness and detail. It not only delivers key facts but also constructs a rich semantic network through layered explanations, descriptive modifiers, and logical elaboration.
Although language features such as message length [17], verbosity [18], and detail level [19] have been examined in the literature, an integrated understanding of how succinctness versus elaborateness functions as a coherent stylistic dimension in AI service conversations is lacking [20]. Three experiments in this study demonstrate that when AI chatbots communicate with customers, the use of elaborate language leads to higher levels of customer satisfaction compared to that resulting from succinct language. Furthermore, this effect is mediated by warmth and moderated by customer relationship norm orientation.
This study offers both theoretical and practical contributions. Theoretically, it enriches the existing literature on AI chatbots by introducing linguistic succinctness as a key dimension of language style, consequently broadening the conceptual scope of research in this domain. Furthermore, the findings extend the application of Language Expectancy Theory to the context of e-commerce and digital service interactions, demonstrating how deviations from expected linguistic norms influence customer evaluations. This study also advances our understanding of the boundary conditions under which chatbot communication affects service satisfaction by incorporating customer relationship norm orientation as a moderating factor. From a practical perspective, the findings offer actionable insights useful for managers seeking to optimize the communication styles of AI chatbots to enhance customer satisfaction and the overall service experience. Specifically, during the design and development phase, practitioners can tailor the chatbot’s language style (succinct or elaborate) based on the target customer segment’s relationship norm orientation and the nature of the firm–customer relationship. For instance, when serving long-term customers in a relational bond, practitioners should program their chatbot to adopt a more elaborate language style to convey warmth, whereas a succinct style may be more efficient and appreciated in purely transactional interactions.

2. Theoretical Background and Hypothesis Development

2.1. Language Style in Communication

Language plays a vital role in interpersonal and commercial communication [21]. Even small linguistic choices, such as tense, length, intensity, word selection, and tone, can significantly shape how a message is received. For example, using present-tense verbs in communication (vs. past- or future-tense) leads message recipients to perceive the information as more helpful and persuasive [22]. Language style refers to the way people speak or write—that is, how they convey content that carries social meaning [7]. Different language styles significantly influence customer perceptions. In communication with customers, firms that adopt appropriate language styles can meet recipients’ linguistic expectations, thereby enhancing communication effectiveness. For instance, using concrete, accurate, and interactive language styles increases the success rate of crowdfunding projects [23]. Also, when consumers are familiar with a brand, the brand’s use of informal (vs. formal) language in communication enhances trust in that brand [24]. Furthermore, as Packard and Berger [8] found, compared to implicit language (e.g., “I like it”), explicit expressions of endorsement (e.g., “I recommend it”) lead recipients to perceive the sender as holding a more extreme positive attitude toward the product and having greater product expertise, thus increasing purchase intention. Packard et al. [5] showed that customers are more satisfied when employees use both cognitive and affective language—but at separate, specific times.

2.2. The Role of Language in AI-Mediated Service Encounters

In service encounters, the language employed by service providers functions as a strategic signal. For example, when employees use singular first-person pronouns, customer satisfaction and purchasing are boosted because these pronouns signal that the employee is emotionally and behaviorally present [25]. Polite language systematically affects recipients’ perceptions of psychological distance; that is, recipients interpret polite language as a signal that the communicator is deliberately maintaining social distance, implying a relatively distant relationship with lower intimacy [26].
During AI-mediated service encounters, the text produced by chatbots serves as the principal medium through which service value is communicated [27]. In the online service context, consumers perceive, encode, and understand the text produced by a chatbot to form judgments about service quality [28]. From the perspective of signaling theory [29,30], customers process the language output of a chatbot to make judgments about the underlying qualities of the service provider (e.g., warmth and attentiveness). Thus, in online interactions, chatbot language serves not only to answer customer queries but also to signal the care and commitment of the service provider.

2.3. Language Style of AI Chatbots

Chatbots are system-based agents capable of interacting with customers, communicating, and providing services; they can be virtual (such as online chatbots) or physical devices [31]. AI chatbots, powered by artificial intelligence technologies, interact with customers, engage in dialogue, and deliver services [1,2]. The language style used by an AI chatbot is a critical factor in human–machine interactions and has received considerable attention in the literature. Recent studies on chatbot language style are summarized in Table 1. The literature categorizes the language styles of AI chatbots into various types, such as concrete versus abstract [9,11], literal versus figurative [14], and formal versus informal [10]. These different language styles have varying impacts on customer service satisfaction and recovery satisfaction. For instance, Zhu et al. [11] found that when chatbots respond to service inquiries, using a concrete language style (as opposed to an abstract one) leads to higher customer satisfaction. This is because concrete language makes users feel that the chatbot understands their needs accurately, while detailed responses enhance the perception of being listened to and increase the credibility of solutions. Chauhan and Mehra [32] demonstrated that in service failure situations, online travel agency chatbots using a concrete language style were more likely to receive customer forgiveness, as they convey clearer and more responsible signals. Li and Wang [10] further found that when chatbots adopt an informal (as opposed to formal) language style, customers’ intentions to continue using a service and brand attitudes improve.
Building on prior work, this paper examines the use of succinct versus elaborate language style, a practical and fundamental design choice for chatbot development. Recently, Yu et al. [20] examined the effect of recommendation conciseness on customers’ attitudes toward AI chatbots in social consumption contexts (e.g., restaurant recommendations). Our work shifts the focus to the e-commerce customer service context, aiming to advance our nuanced understanding of how chatbot language style shapes online user experience.

2.4. Succinct and Elaborate Language Style

Succinct and elaborate language styles are classified based on the degree of conciseness in communication. While there have been studies on linguistic characteristics like length [17], verbosity [18], and level of detail [19], research on AI-mediated conversational language conciseness in service interaction remains limited [20]. Synthesizing discrete linguistic features (e.g., length and details) into the higher-order constructs of succinct and elaborate styles provides a holistic framework for examining the role of language style in service interactions. Specifically, succinct language refers to a communication style that uses concise words and short sentences, eliminating redundant information to convey core content in the most direct and clear manner. Elaborate language is a communication style characterized by comprehensiveness, detail, and depth, emphasizing full coverage of information and clear explanation of details [15]. Succinct language is relatively brief and unembellished, employing straightforward expressions, whereas elaborate language involves the use of numerous distinctions, complex linguistic forms, and indirect phrasing.
Succinct and elaborate styles result in different social perceptions and degrees of communication effectiveness. For example, in high-uncertainty-avoidance cultures (e.g., the Netherlands), succinct language is more persuasive than elaborate language, while in low-uncertainty-avoidance cultures (e.g., the UK), elaborate language is more persuasive than succinct language [15]. In addition, research shows that using abbreviated language triggers perceptions of insincerity and lower effort on the part of the message sender, leading to a reduced willingness to respond. In contrast, complete language without abbreviations leads recipients to perceive that the sender has expended greater effort and is more sincere [6]. Providing detailed explanations in communication can convey supportiveness to the recipient. Providing longer messages with more details effectively enhances perceived supportiveness and persuasiveness [15]; in contrast, overly succinct expressions may weaken these positive perceptions [43,44]. For instance, Sevier et al. [45] suggest that elaborating design concepts helps designers better communicate their ideas and improves stakeholders’ understanding and the comprehensibility of proposals. In contrast, when content is presented concisely, recipients often need to fill in informational gaps themselves, increasing cognitive load and potentially hindering comprehension.
Based on the existing literature, in this study, we posit that when AI chatbots communicate with customers, elaborate language leads to better communication outcomes than succinct language. When AI chatbots use elaborate language, they provide customers with more complete and clear service guidance, reduce the difficulty of information decoding, and lower cognitive load. At the same time, greater linguistic quantity conveys a stronger attitude of caring for the customer, leading consumers to perceive the service provider as more sincere, subsequently increasing service satisfaction. In contrast, although succinct language can quickly and accurately convey core information, it may fail to meet customers’ expectations for in-depth service. In interpersonal communication, when language use exceeds expectations, it often enhances message credibility and persuasiveness; when language use falls short of expected communication standards, persuasiveness is weakened [46]. In interactions between AI chatbots and customers, using succinct language allows the delivery of basic, accurate, and complete information, which may only meet customers’ baseline expectations for communication. In contrast, using elaborate language may exceed these expectations and is therefore more likely to produce positive communication outcomes. Therefore, the following hypothesis is proposed.
H1. 
AI chatbots using elaborate language, as opposed to succinct language, will lead to higher customer service satisfaction.

2.5. The Mediating Role of Warmth

According to the stereotype content model, warmth and competence are two fundamental dimensions through which individuals evaluate others [47,48]. When non-human entities exhibit human-like characteristics, individuals also evaluate them according to these two dimensions (warmth and competence) [49]. Warmth can influence individuals’ emotions and shape their behaviors [50]. Packard et al. [25] found that greater warmth provokes customers to feel more cared for and understood, helping enhance customer satisfaction. Given the critical role of warmth in shaping interpersonal evaluations, its importance may extend to human–AI interactions as well. With the rapid advancement of artificial intelligence technologies, the gap between AI service agents and human service providers in terms of perceived competence has been narrowing, especially in regard to delivering objective advice and information [51]. This convergence in perceived competence may result in consumers relying more on non-competence-related attributes, such as emotional resonance and social warmth, when evaluating AI service performance.
According to Language Expectancy Theory, people have expectations about appropriate communication styles, and these expectations influence how individuals perceive messages, thereby affecting recipients’ attitudes [52]. Warmth is a fundamental dimension of social evaluation directly tied to users’ socio-emotional expectations regarding communication [39]. A language style that violates (or confirms) these expectations may affect customers’ perceptions of warmth. Based on Brown’s politeness principle, language use must balance clarity of expression with the need to maintain others’ face [53]. As discussed above, succinct language may appear to be overly direct and reflect failure to sufficiently acknowledge the recipient’s feelings, resulting in a perceived lack of warmth. In contrast, elaborate language, by providing more details, can convey care and consideration, fulfilling psychological needs. Novak et al. [54] found that expressions containing contextualized details positively enhance the interaction partner’s emotional experience. Such positive emotional experience in interpersonal communication often stems from the perception of warmth conveyed by detailed, considerate expressions. Extending this logic to human–machine interactions, higher levels of warmth are associated with customers’ positive responses to AI chatbots’ elaborate language. Customers’ perceptions of warmth toward AI chatbots can increase service satisfaction [55]. For instance, in a study on chatbots’ use of emojis, Yu and Zhao [56] found that consumers perceive service agents who use emojis as warmer and report higher satisfaction with their service, and this effect is stronger when chatbots serve hedonically motivated consumers. Therefore, warm robots provide a more comfortable and enjoyable interaction experience, satisfying people’s intrinsic desire for friendly relationships. Accordingly, the following hypothesis is proposed.
H2. 
Warmth mediates the effect of AI chatbot language style on customer service satisfaction. Specifically, compared to succinct language, elaborate language increases service satisfaction by enhancing warmth.

2.6. The Moderating Role of Relationship Norm Orientation

Relationship norm orientation refers to the type of relationship an individual expects to establish during interactions with others [57]. According to the norms governing the provision and reception of benefits in relationships, relationship norm orientation can be categorized into communal relationships and exchange relationships [57]. In exchange relationships, the motivation behind helping others is to receive reciprocal returns; individuals expect compensation for their help. In contrast, in communal relationships, helping others is motivated by care and concern, and individuals are more oriented toward emotional rewards. In service evaluations, some customers prioritize efficiently and accurately completing tasks, while others place greater importance on establishing positive and harmonious interpersonal relationships [58,59,60].
Individual differences in relationship norm orientation play a critical role in shaping service evaluations [60,61,62]. For example, Han et al. [61] found that for consumers with a communal orientation, AI chatbots’ expression of positive emotions increases their service satisfaction. For consumers with an exchange orientation, however, this effect is nonexistent or even negative. This is because exchange-oriented customers expect AI chatbots to solve problems quickly; if a chatbot appears overly enthusiastic, they perceive it as unprofessional, and this violation of expectations outweighs any positive effects generated by emotional contagion, so there is no improvement in satisfaction. In contrast, communally oriented customers desire warmth and care in service interactions. For them, a chatbot’s expression of positive emotions conveys care and aligns with their expectations, thereby improving their moods and increasing satisfaction.
Role theory suggests that satisfaction increases when customers perceive that service providers behave in ways consistent with the role norms expected in a given service context [63]. Conversely, service behaviors that deviate from these role expectations are likely to cause dissatisfaction. In online service settings, AI chatbots assume the social role of human service agents. According to the core principle of LET, consumers form expectations about chatbot behaviors based on their own relationship norm orientation and evaluate a chatbot by comparing its actual behavior against these expectations. A language style that violates (or confirms) these expectations may affect customers’ subsequent evaluations. For exchange-oriented customers, time cost and information efficiency in service interactions are critical. Succinct and clear expressions help them make decisions quickly. For communally oriented customers, however, the brevity and directness of succinct language may make interactions seem cold and mechanical, conditions that are unfavorable for creating a warm and harmonious atmosphere, leading to lower satisfaction with succinct language. When AI chatbots use elaborate language, they appear more thorough and attentive, but this may contradict the efficiency-seeking and cost-conscious expectations of exchange-oriented consumers. In contrast, for communally oriented consumers, elaborate language better conveys a chatbot’s understanding and care, making them feel valued and respected. Furthermore, using elaborate language contributes to maintaining relational harmony [16], thus making customers’ experiences more positive during service interactions.
Consequently, the following hypothesis is proposed.
H3. 
Relationship norm orientation moderates the effect of AI chatbot language style on customer service satisfaction. Specifically, exchange-oriented customers exhibit higher service satisfaction when AI chatbots use succinct language (as opposed to elaborate language); communally oriented customers exhibit higher service satisfaction when AI chatbots use elaborate (as opposed to succinct) language.
A conceptual model of this hypothesis is given in Figure 1.

3. Methods

We tested our hypotheses across three experimental studies. In Study 1, we examined the effect of language style (succinct vs. elaborate) of AI chatbots on service satisfaction (H1). Study 2 replicated the results of Study 1 and tested the mediating role of warmth (H2). Study 3 examined the moderating role of relationship norm orientation (exchange vs. communal). To ensure validity in these studies, we used different stimulus materials in pretests. The participant demographics for these studies are listed in Table 2.

3.1. Study 1

The purpose of Study 1 was to examine the effect of AI chatbot language style (succinct vs. elaborate) on customer service evaluation. A 2 (chatbot language style: succinct vs. elaborate) between-subjects design was employed.

3.1.1. Participants and Experimental Stimuli

We used G*Power 3.1.9.7 to calculate the minimum total sample size, which indicated that N = 128 was required to achieve the desired statistical power of 0.80 (d = 0.50, α = 0.05). A total of 240 participants were recruited via an online survey platform. After 10 participants who failed the attention check item were excluded, 230 participants were included in the analysis (69.6% females; Mage = 30.46), yielding a valid response rate of 95.83%. This final sample size exceeds the required minimum, ensuring this study has adequate statistical power.
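The minimum sample size reported above can be reproduced computationally. The following Python sketch (our illustration, not the authors' code) iterates the noncentral-t power formula for a two-tailed independent-samples t-test, the same computation G*Power performs for this design:

```python
from scipy import stats

def two_sample_t_sample_size(d, alpha=0.05, power=0.80):
    """Smallest equal group size n reaching the target power for a
    two-tailed independent-samples t-test (noncentral-t, as in G*Power)."""
    n = 2
    while True:
        df = 2 * n - 2
        nc = d * (n / 2) ** 0.5                 # noncentrality for equal groups
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        achieved = (1 - stats.nct.cdf(t_crit, df, nc)
                    + stats.nct.cdf(-t_crit, df, nc))
        if achieved >= power:
            return n, achieved
        n += 1

n_per_group, achieved = two_sample_t_sample_size(d=0.50)
print(n_per_group * 2)  # 128 total, matching the reported minimum
```

With d = 0.50 and α = 0.05, the loop stops at 64 participants per group, i.e., the N = 128 total reported in the text.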
The experimental materials (see Appendix A) were adapted from those used in prior research on chatbot language style [40,42]. The results of pretests and post-tests indicated that the participants in the succinct language condition perceived the language to be more succinct (Msuccinct = 6.14, SD = 1.07, Melaborate = 4.64, SD = 1.54; t(99) = 5.67, p < 0.001), while the participants in the elaborate language condition perceived the language to be more elaborate (Msuccinct = 4.54, SD = 1.39, Melaborate = 5.74, SD = 0.75; t(99) = −5.38, p < 0.001). No significant differences were found between the two conditions in terms of politeness (Msuccinct = 5.98, SD = 0.87, Melaborate = 6.16, SD = 0.71; t(99) = −1.13, p = 0.26), realism (Msuccinct = 6.3, SD = 0.65, Melaborate = 6.26, SD = 0.66; t(99) = 0.31, p = 0.76), wordiness (Msuccinct = 2.12, SD = 0.72, Melaborate = 2.26, SD = 0.69; t(99) = −0.99, p = 0.32) and formality (Msuccinct = 5.84, SD = 0.84, Melaborate = 5.68, SD = 1.10; t(99) = 0.82, p = 0.42).

3.1.2. Procedure

The participants were informed that they were taking part in a market research study on AI chatbots, and they were randomly assigned to either the succinct or elaborate language conditions. All the participants were instructed to imagine that they were shopping for clothing at an online store that used an AI chatbot named “Little Smart,” which could answer their questions. The participants were then presented with an image showing a conversation between Little Smart and a customer. In the succinct language condition, Little Smart responded using succinct language (e.g., “Hello, dear customer, the white version will be back in stock soon. Please stay tuned.”). In the elaborate language condition, Little Smart used more detailed language (e.g., “Hello, dear customer, the white version will be restocked soon. To ensure you can purchase it in time, we recommend closely monitoring updates on the product page”; see Appendix A). After reading the conversation, the participants completed manipulation check items: “To what extent do you feel that Little Smart used succinct language?” and “To what extent do you feel that Little Smart used elaborate language?” (1 = strongly disagree, 7 = strongly agree). Next, the participants responded to service satisfaction measures: “Little Smart’s service was good,” “Little Smart’s service met my expectations,” and “I was satisfied with Little Smart’s service” (1 = strongly disagree, 7 = strongly agree; α = 0.71) [64]. Finally, the participants provided demographic information.
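The reliability coefficient reported for the satisfaction scale (α = 0.71) follows the standard Cronbach's alpha formula. As a minimal sketch (illustrative only, not the authors' analysis script), it can be computed from an item-response matrix as follows:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array,
    using the standard variance-based formula with sample variances."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

# Sanity check with hypothetical data: perfectly parallel items give alpha = 1
base = np.array([5.0, 6.0, 4.0, 7.0, 5.0])
print(round(cronbach_alpha(np.column_stack([base, base, base])), 6))  # 1.0
```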

3.1.3. Results

Manipulation Check: The results of one-way ANOVA showed that the participants in the succinct language condition perceived the chatbot’s language to be more succinct relative to those in the elaborate language condition (Msuccinct = 5.90, SD = 1.03, vs. Melaborate = 4.87, SD = 1.45, F(1, 228) = 38.67, p < 0.001, η2 = 0.145). Conversely, the participants in the elaborate language condition perceived the chatbot’s language as more elaborate than those in the succinct language condition (Msuccinct = 4.70, SD = 1.23, Melaborate = 5.76, SD = 0.72, F(1, 228) = 59.13, p < 0.001, η2 = 0.206). These results confirm that the manipulation of language style was successful.
Service Satisfaction: The results of one-way ANOVA revealed that the participants in the elaborate language condition reported higher service satisfaction with the AI chatbot than those in the succinct language condition (Msuccinct = 5.79, SD = 0.72, Melaborate = 6.10, SD = 0.48, F(1, 228) = 14.47, p < 0.001, η2 = 0.060). These results are consistent with H1.
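With two conditions, each one-way ANOVA above reduces to a comparison of two group means, and the reported effect size η² is the between-groups share of total variance. A minimal Python sketch of this computation (the data shown are hypothetical, not the study's raw responses):

```python
import numpy as np
from scipy import stats

def anova_two_groups(g1, g2):
    """One-way ANOVA for two groups plus eta-squared (SS_between / SS_total)."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    f, p = stats.f_oneway(g1, g2)
    pooled = np.concatenate([g1, g2])
    grand = pooled.mean()
    ss_between = (len(g1) * (g1.mean() - grand) ** 2
                  + len(g2) * (g2.mean() - grand) ** 2)
    ss_total = ((pooled - grand) ** 2).sum()
    return f, p, ss_between / ss_total

# Illustrative ratings only (the study's raw data are not public)
succinct = [5.0, 6.0, 5.5, 6.5, 5.0]
elaborate = [6.0, 6.5, 7.0, 6.0, 6.5]
f, p, eta_sq = anova_two_groups(succinct, elaborate)
```

For two groups, this F statistic equals the square of the corresponding independent-samples t statistic, so the ANOVA and t-test framings are interchangeable here.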
Study 1 provided direct evidence for our main hypothesis: AI chatbots using elaborate language (as opposed to succinct language) lead to higher customer service satisfaction. However, the psychological mechanism underlying this effect remains unexamined; Study 2 addresses this question.

3.2. Study 2

Study 2 was designed to replicate the findings of Study 1 and test the mediating effect of warmth (H2). A 2 (chatbot language style: succinct vs. elaborate) between-subjects design was employed. The participants were undergraduate students from a university in Beijing, a population selected for two key reasons. First, undergraduates tend to be relatively homogeneous in age, educational background, and life experience, helping to minimize confounding variables and strengthen internal validity. Second, university students are generally familiar with AI chatbots, allowing them to engage more naturally with the experimental materials and respond in a contextually informed manner. New measures assessing warmth and emotional response were added.

3.2.1. Participants and Experimental Stimuli

We used G*Power to calculate a minimum total sample size, which indicated that N = 128 was required to achieve the desired statistical power of 0.80 (d = 0.50, α = 0.05). We recruited a total of 200 undergraduate students from a university in Beijing (82.5% females, Mage = 20.15). All the participants passed the attention check, and no participants were excluded. This sample size exceeds the required minimum, ensuring that this study has adequate statistical power. The stimuli in Study 2 were identical to those in Study 1.

3.2.2. Procedure

The procedure was similar to that used in Study 1. The participants were informed that they were taking part in a market research study on AI chatbots and were randomly assigned to either the succinct or the elaborate language condition. After reading the conversation between the customer and the chatbot, the participants completed the manipulation check items. Next, the participants responded to items measuring warmth: “I think this chatbot is warm,” “I think this chatbot is kind,” and “I think this chatbot is friendly” (α = 0.84) [65]. Subsequently, the participants answered the service satisfaction items (1 = strongly disagree, 7 = strongly agree; α = 0.79) [64]. To examine the role of mood, we asked the participants to rate their emotional states using a semantic differential scale with four bipolar adjective pairs: “sad–happy”, “bad mood–good mood”, “angry–pleased”, and “hopeless–energized” (7-point scale; α = 0.90) [66]. Finally, the participants answered demographic questions.

3.2.3. Results

Manipulation Check: The one-way ANOVA results showed that the participants in the succinct language condition perceived the chatbot’s language as more succinct than those in the elaborate language condition (Msuccinct = 5.37, SD = 1.10, vs. Melaborate = 4.94, SD = 1.36, F(1, 198) = 6.04, p < 0.05, η2 = 0.029). Conversely, the participants in the elaborate language condition perceived the chatbot’s language as more elaborate than those in the succinct language condition (Msuccinct = 5.21, SD = 1.10, Melaborate = 5.69, SD = 0.92, F(1, 198) = 11.18, p < 0.001, η2 = 0.053). These results confirm that the manipulation of language style was successful.
Warmth: The one-way ANOVA results indicated that the participants in the elaborate language condition reported higher levels of warmth toward the AI chatbot than those in the succinct language condition (Msuccinct = 5.28, SD = 0.96, Melaborate = 5.55, SD = 0.90, F(1, 198) = 4.09, p < 0.05, η2 = 0.020).
Mood: The one-way ANOVA results showed no significant difference in mood between the two conditions (Msuccinct = 4.87, SD = 1.10, Melaborate = 5.04, SD = 1.18, F(1, 198) = 1.02, p > 0.05, η2 = 0.005). This finding suggests that the participants’ emotional states did not confound the experimental results.
Service Satisfaction: The one-way ANOVA results revealed that the participants in the elaborate language condition reported higher service satisfaction with Little Smart than those in the succinct language condition (Msuccinct = 5.19, SD = 0.93, Melaborate = 5.53, SD = 0.77, F(1, 198) = 7.97, p < 0.01, η2 = 0.039). Additionally, when age, gender, and education level were included as covariates in the ANOVA model, the effect of language style on service satisfaction remained significant.
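As a side note, for a two-group design the one-way ANOVA F statistic equals the squared independent-samples t, so the reported F values can be approximately reconstructed from the published means and SDs alone. The sketch below is illustrative (the function name and the assumed 100-per-condition split are ours; the exact split after random assignment is not reported):

```python
from math import sqrt

def f_from_summary(m1, sd1, n1, m2, sd2, n2):
    """F(1, n1 + n2 - 2) for a two-group one-way ANOVA, reconstructed
    from group means and standard deviations (F = t**2 for two groups)."""
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t ** 2

# Service satisfaction in Study 2, assuming 100 participants per condition:
print(round(f_from_summary(5.19, 0.93, 100, 5.53, 0.77, 100), 2))
# close to the reported F = 7.97 (differences reflect rounding of the published means/SDs)
```

The same check can be applied to the manipulation-check statistics, which reproduce within rounding error.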
Mediation Analysis: We conducted a mediation analysis using bootstrapping (5000 resamples, PROCESS model 4) [67] with language style as the independent variable (0 = succinct language, 1 = elaborate language), warmth as the mediator, and service satisfaction as the dependent variable. The results supported our proposed model, showing a significant indirect effect (indirect effect = 0.15, SE = 0.078, 95% CI [0.0038, 0.3128]; see Figure 2). Thus, H2 is supported.
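In PROCESS model 4, the indirect effect is the product of the X→M slope (a path) and the M→Y slope controlling for X (b path), with a percentile bootstrap confidence interval. The self-contained sketch below illustrates the logic on simulated data; the data, effect sizes, and function names are invented for illustration and are not the study's estimates.

```python
import random
from statistics import mean

def indirect_effect(x, m, y):
    """a*b: a = OLS slope of M on X; b = OLS slope of Y on M controlling for X."""
    xc = [v - mean(x) for v in x]
    mc = [v - mean(m) for v in m]
    yc = [v - mean(y) for v in y]
    sxx = sum(v * v for v in xc); smm = sum(v * v for v in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    a_path = sxm / sxx
    # Cramer's rule for the partial slope of M in Y ~ X + M
    b_path = (sxx * smy - sxm * sxy) / (sxx * smm - sxm ** 2)
    return a_path * b_path

def bootstrap_ci(x, m, y, reps=2000, seed=7):
    """Percentile 95% CI for the indirect effect via case resampling."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

# Simulated example: X (0 = succinct, 1 = elaborate) raises warmth M,
# which in turn raises satisfaction Y (true indirect effect = 0.4 * 0.5 = 0.2).
rng = random.Random(1)
x = [rng.randrange(2) for _ in range(200)]
m = [0.4 * xi + rng.gauss(0, 0.3) for xi in x]
y = [0.5 * mi + rng.gauss(0, 0.3) for mi in m]
print(indirect_effect(x, m, y), bootstrap_ci(x, m, y))
```

A CI that excludes zero, as in the study's [0.0038, 0.3128], indicates a significant indirect effect.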
The results of Study 2 provided convergent evidence for the findings of Study 1 and ruled out mood as a potential alternative explanation. In addition, warmth was found to mediate the relationship between the AI chatbot’s language style and service satisfaction. In the next step, our objective was to identify a moderator, namely relationship norm orientation.

3.3. Study 3

Study 3 used a health examination center consultation scenario to examine whether H1 would hold in a different service setting. To test the moderating role of customer relationship norm orientation (H3), we measured participants’ relationship norm orientation. A 2 (AI chatbot language style: succinct vs. elaborate) between-subjects experimental design was employed.

3.3.1. Participants and Experimental Stimuli

G*Power analysis indicated that a minimum total sample size of N = 199 (d = 0.40, α = 0.05) was required to achieve the desired power of 0.80. A total of 360 participants were recruited via an online survey platform. After 16 participants who failed the attention check item were excluded, 344 participants were included in the analysis (68% female; Mage = 30.02), yielding a valid response rate of 95.56%. This final sample size exceeds the required minimum, ensuring adequate statistical power.
The experimental materials in Study 3 were designed based on those used in prior research [40,42]. The results of pretests and posttests indicated that the participants in the succinct language condition perceived the language as more succinct (Msuccinct = 5.56, SD = 1.15, Melaborate = 4.60, SD = 1.16; t(99) = 5.67, p < 0.001), while the participants in the elaborate language condition perceived the language as more elaborate (Msuccinct = 5.32, SD = 0.77, Melaborate = 5.88, SD = 0.69; t(99) = −3.84, p < 0.001). No significant differences were found between the two conditions in terms of politeness (Msuccinct = 6.16, SD = 0.82, Melaborate = 6.30, SD = 0.74; t(99) = −0.90, p = 0.37), realism (Msuccinct = 6.14, SD = 0.70, Melaborate = 6.06, SD = 0.77; t(99) = 0.55, p = 0.59), naturalness (Msuccinct = 5.56, SD = 0.93, Melaborate = 5.82, SD = 0.96; t(99) = −1.37, p = 0.17), wordiness (Msuccinct = 2.64, SD = 0.60, Melaborate = 2.82, SD = 0.75; t(99) = 0.39, p = 0.19), and formality (Msuccinct = 5.84, SD = 0.98, Melaborate = 5.60, SD = 1.05; t(99) = 1.18, p = 0.24).

3.3.2. Procedure

First, the participants completed a scale measuring relationship norm orientation (9 items, α = 0.72; 7-point scale: 1 = strongly disagree, 7 = strongly agree) [68,69], where higher scores indicate a stronger tendency toward a communal relationship orientation, whereas lower scores reflect a greater inclination toward an exchange relationship orientation. Next, the participants were informed that they were taking part in a market research study on AI chatbots and were randomly assigned to either the succinct or elaborate language condition. They were instructed to imagine that they were preparing to visit a health examination center for a medical check-up and intended to ask the customer service team some questions online beforehand. The center used an AI chatbot named “Little Smart” for customer service. The participants then read descriptions of their interactions with Little Smart. In the succinct language condition, the participants read a dialogue in which Little Smart used a succinct language style (e.g., “Hello, dear customer. I am Little Smart. How can I assist you?”). In the elaborate language condition, the participants read a dialogue in which Little Smart used a more detailed language style (e.g., “Hello, dear customer. I am Little Smart. If you have any needs, please feel free to let me know. How may I assist you?”; see Appendix A). After reading the conversation, the participants completed the manipulation check items. Next, the participants responded to the service satisfaction items (1 = strongly disagree, 7 = strongly agree; α = 0.78) [64]. Finally, the participants provided demographic information.

3.3.3. Results

Manipulation Check: The one-way ANOVA results showed that the participants in the succinct language condition perceived the AI chatbot’s language as more succinct than those in the elaborate language condition (Msuccinct = 5.75, SD = 1.05, Melaborate = 5.25, SD = 1.01, F(1, 342) = 20.29, p < 0.001, η2 = 0.056). Conversely, the participants in the elaborate language condition perceived the chatbot’s language as more elaborate than those in the succinct language condition (Msuccinct = 5.60, SD = 1.00, Melaborate = 5.92, SD = 0.83, F(1, 342) = 11.04, p < 0.001, η2 = 0.031). These results confirm that the manipulation of language style was successful.
Main Effect: The one-way ANOVA results revealed that the participants in the elaborate language condition reported higher service satisfaction with Little Smart than those in the succinct language condition (Msuccinct = 5.67, SD = 0.90, Melaborate = 5.87, SD = 0.71, F(1, 342) = 5.34, p < 0.05, η2 = 0.015). Additionally, when age, gender, and education level were included as covariates in the model, the effect of language style on service satisfaction remained significant.
Moderation Analysis: We conducted a floodlight analysis to probe the interaction between relationship norm orientation and chatbot language style, using Model 1 in Hayes’ PROCESS procedure [67]. Combined with the Johnson–Neyman technique, this procedure illuminates the entire range of the continuous moderating variable, relationship norm orientation, and clearly identifies the regions of significance. The regression results revealed a significant interaction effect (t = 2.85, p = 0.0047) between chatbot language style (0 = succinct, 1 = elaborate) and relationship norm orientation (higher scores indicate a communal orientation; lower scores indicate an exchange orientation). Table 3 presents the regression output.
The Johnson–Neyman technique identifies the specific score on the continuous scale at which the effect becomes significant, providing greater precision than a simple median split by defining the exact region of significance. Using floodlight analysis with the Johnson–Neyman technique, we identified regions of significance for the simple effect of chatbot language style along the relationship norm orientation continuum. This analysis revealed two Johnson–Neyman points. The first point falls at a relationship norm orientation score of 3.0704: as shown in Figure 3, a succinct language style significantly enhanced the service satisfaction of participants who scored below 3.0704 (i.e., those who are exchange-oriented). The second point falls at 4.4037: for participants who scored above 4.4037 (i.e., those who are communally oriented), an elaborate language style yielded significantly higher service satisfaction. These results support H3, confirming that relationship norm orientation plays a moderating role.
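Formally, a Johnson–Neyman point is a moderator value W at which the simple effect of the focal predictor, b1 + b3·W, sits exactly at the significance threshold, i.e., (b1 + b3·W)² = t²crit · Var(b1 + b3·W), which reduces to a quadratic in W. The sketch below solves that quadratic for illustrative coefficients and variance–covariance entries of our own invention (not the Study 3 estimates, which would require the full coefficient covariance matrix):

```python
from math import sqrt

def jn_points(b1, b3, v11, v13, v33, t_crit=1.97):
    """Johnson-Neyman points: solve (b1 + b3*w)**2 = t_crit**2 * (v11 + 2*w*v13 + w**2*v33).
    b1: simple effect of the focal predictor at moderator w = 0; b3: interaction
    coefficient; v11, v13, v33: entries of the coefficient variance-covariance
    matrix. Returns the real roots, sorted (0, 1, or 2 of them)."""
    t2 = t_crit ** 2                     # t_crit ~ 1.97 for df around 340
    A = b3 ** 2 - t2 * v33
    B = 2 * (b1 * b3 - t2 * v13)
    C = b1 ** 2 - t2 * v11
    disc = B ** 2 - 4 * A * C
    if disc < 0:
        return []
    return sorted([(-B - sqrt(disc)) / (2 * A), (-B + sqrt(disc)) / (2 * A)])

# Hypothetical values on a 1-7 moderator scale (NOT the paper's estimates):
print([round(w, 2) for w in jn_points(-0.9, 0.25, 0.09, -0.02, 0.005)])
```

With two roots inside the moderator's observed range, the simple effect is significant in one direction below the lower point and in the opposite direction above the upper point, which is the pattern reported in Figure 3.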
Study 3 confirmed that H1 also holds in a different service setting, thereby enhancing the generalizability and robustness of the overall findings. In addition, the results of Study 3 supported the moderating effect of relationship norm orientation, consistent with H3. Specifically, exchange-oriented customers preferred succinct language, whereas communally oriented customers responded more favorably to elaborate language.

4. Conclusions and Discussion

With the advancement of AI technology, AI chatbots are increasingly being deployed in online service contexts. Grounded in Language Expectancy Theory, this study examines the impact of AI chatbot language style (succinct vs. elaborate) on customer service satisfaction while also investigating the underlying psychological mechanism and boundary conditions of this effect. Through three experiments, we tested the proposed hypotheses. The results of Study 1 indicated that customers evaluate service more favorably when an AI chatbot uses an elaborate language style as opposed to a succinct one. Study 2 revealed that warmth mediates this effect. Furthermore, Study 3 demonstrated that the influence of AI chatbots’ language style on service satisfaction is moderated by customer relationship norm orientation: when customers are communally oriented, they exhibit higher satisfaction with AI chatbots using elaborate (vs. succinct) language; in contrast, when customers are exchange-oriented, they report higher satisfaction with chatbots using succinct (vs. elaborate) language.

4.1. Theoretical Implications

First, this study introduces elaborate language and succinct language as a stylistic dimension in the field of AI chatbot research, moving beyond the traditional focus in human–computer interaction (HCI) research on non-linguistic elements such as anthropomorphism and emotional expression. Although prior research has categorized language styles along various dimensions—such as concrete versus abstract, formal versus informal, emotional versus rational, and implicit versus explicit—and examined their effects on consumer satisfaction, purchase intentions, and brand evaluations, there remains a notable gap in research on the distinction between succinct and elaborate language. This study addresses this gap, enriching the literature on language style in the context of service satisfaction and deepening our understanding of AI chatbots in service interactions.
Second, this study uncovers the psychological mechanisms and boundary conditions underlying the impact of AI chatbot language style on service satisfaction. By demonstrating the mediating role of warmth, this research sheds light on customers’ psychological responses during human–machine interactions. Furthermore, by examining the moderating effect of customer relationship norm orientation, this study extends the boundary conditions of consumer satisfaction in human–computer interaction contexts from the perspective of individual differences. This work contributes to a more nuanced understanding of how personal traits shape user experiences with AI-driven service agents.
Finally, this study was grounded in Language Expectancy Theory to explain how AI chatbot language style influences service satisfaction, thereby extending the application of this theory to the domain of AI-based marketing and service research. Our findings illustrate that customers’ expectations about appropriate communication styles are activated even in interactions with non-human agents, and that deviations from or alignment with these expectations influence satisfaction judgments. This evidence confirms that the fundamental mechanism of Language Expectancy Theory is applicable and verifiable within the domain of human–machine interaction.

4.2. Practical Implications

This study offers strategic guidance and an empirical foundation for both technology developers and service providers regarding the design of AI chatbot language styles. Our findings demonstrate that language is not merely a functional tool for efficiency but rather serves as a key social design variable. We provide empirical evidence that elaborate language may help cultivate a sense of warmth in customers’ minds, providing a mechanism for service differentiation in homogenized digital service markets. We therefore suggest that developers consider language style as a strategic design feature during the development and customization of AI chatbots, ensuring initial configurations align with the goal of fostering positive customer perceptions.
Furthermore, this research provides strategic guidance that can help service providers to move beyond a one-size-fits-all approach. Our results establish that customer relationship norm orientation (exchange vs. communal) serves as a valuable criterion for market segmentation. This finding provides a foundation for personalized strategies wherein companies could leverage customer data (such as purchase history or behavioral indicators) to classify users’ orientations. Based on this classification, AI chatbots could be designed to dynamically adapt their language styles—for example, using a succinct style for exchange-oriented customers and a more elaborate style for communally oriented customers. This personalized approach can enable more intelligent, empathetic, and efficient service delivery, and it is expected to enhance service quality, customer satisfaction, and long-term loyalty.

4.3. Limitations

Despite its theoretical and practical contributions, this study is not without limitations, which offer valuable directions for future research. First, the experiments relied on hypothetical online service scenarios, which, although carefully designed and pretested, may not fully capture the complexity of real-world customer–chatbot interactions. In future studies, researchers could employ field experiments or behavioral data from actual service platforms to test the ecological validity of the findings. Second, while this study focused on language style (succinct vs. elaborate) as the primary linguistic dimension, chatbots’ communication involves multiple stylistic cues—such as tone, emotional expression, and nonverbal features (e.g., emojis, avatars, and voice). In future studies, researchers could adopt a multimodal approach to examine how these linguistic and paralinguistic elements jointly shape customer perceptions and satisfaction. Third, this study did not consider other boundary conditions that may moderate the observed main effect. Future research could examine such psychological and situational moderators to provide a more comprehensive understanding of when, and for whom, language style matters in AI-mediated services. Finally, our samples were drawn from a high-context communication culture (China), where elaborate language is culturally favored for building relationships. This cultural context may heighten the observed preference for elaborate language. Future research could examine the generalizability of our findings in other cultures and contexts.

Author Contributions

Conceptualization, Y.F.; methodology, Y.F. and X.Y.; validation, Y.F. and X.Z.; formal analysis, X.Y.; investigation, X.Y. and L.Z.; data curation, Y.F.; writing—original draft preparation, Y.F. and X.Y.; writing—review and editing, X.Z.; visualization, X.Y.; supervision, X.Z.; project administration, Y.F.; funding acquisition, Y.F. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 72102250), MUC Young Faculty Research Capability Advancement Initiative (No. 2023QNTS27), the Guangdong University Young Innovative Talents Program Project (No. 2024WQNCX004), the Guangdong Education Science Planning Project (No. 2024GXJK215), and the Science and Technology Projects in Guangzhou (No. 2025A04J3589).

Institutional Review Board Statement

Ethical review and approval were waived for this study because all of the experiments involved no personally identifiable or sensitive information and posed no more than minimal risk to participants.

Informed Consent Statement

Informed consent for participation was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to ethical reasons.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT (GPT-5.2) for the purposes of checking for grammatical errors. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Succinct language style condition in Study 1 and Study 2.
Figure A2. Elaborate language style condition in Study 1 and Study 2.
Figure A3. Succinct language style condition in Study 3.
Figure A4. Elaborate language style condition in Study 3.

References

  1. Choi, S.; Mattila, A.S.; Bolton, L.E. To err is human(-oid): How do consumers react to robot service failure and recovery? J. Serv. Res. 2021, 24, 354–371. [Google Scholar] [CrossRef]
  2. Jha, S.; Gupta, S.; Mahajan, R. The effect of motivated consumer innovativeness on the intention to use chatbots in the travel and tourism sector. Asia Pac. J. Tour. Res. 2023, 28, 729–744. [Google Scholar] [CrossRef]
  3. Huang, M.H.; Rust, R.T. The caring machine: Feeling AI for customer care. J. Mark. 2024, 88, 1–23. [Google Scholar] [CrossRef]
  4. Holmqvist, J.; Van Vaerenbergh, Y.; Grönroos, C. Language use in services: Recent advances and directions for future research. J. Bus. Res. 2017, 72, 114–118. [Google Scholar] [CrossRef]
  5. Packard, G.; Li, Y.; Berger, J. When language matters. J. Consum. Res. 2024, 51, 634–653. [Google Scholar] [CrossRef]
  6. Fang, D.; Zhang, Y.E.; Maglio, S.J. Shortcuts to insincerity: Texting abbreviations seem insincere and not worth answering. J. Exp. Psychol. Gen. 2025, 154, 39–57. [Google Scholar] [CrossRef]
  7. Coupland, N. Style: Language Variation and Identity; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  8. Packard, G.; Berger, J. How language shapes word of mouth’s impact. J. Mark. Res. 2017, 54, 572–588. [Google Scholar] [CrossRef]
  9. De Angelis, M.; Tassiello, V.; Amatulli, C.; Costabile, M. How language abstractness affects service referral persuasiveness. J. Bus. Res. 2017, 72, 119–126. [Google Scholar] [CrossRef]
  10. Li, M.; Wang, R. Chatbots in e-commerce: The effect of chatbot language style on customers’ continuance usage intention and attitude toward brand. J. Retail. Consum. Serv. 2023, 71, 103209. [Google Scholar] [CrossRef]
  11. Zhu, Y.; Zhang, J.; Liang, J. Concrete or abstract: How chatbot response styles influence customer satisfaction. Electron. Commer. Res. Appl. 2023, 62, 101317. [Google Scholar] [CrossRef]
  12. Kayeser Fatima, J.; Khan, M.I.; Bahmannia, S.; Chatrath, S.K.; Dale, N.F.; Johns, R. Rapport with a chatbot? The underlying role of anthropomorphism in socio-cognitive perceptions of rapport and e-word of mouth. J. Retail. Consum. Serv. 2024, 77, 103666. [Google Scholar] [CrossRef]
  13. Pfeiffer, B.E.; Sundar, A.; Cao, E. The influence of language style (formal vs. colloquial) on the effectiveness of charitable appeals. Psychol. Mark. 2023, 40, 542–553. [Google Scholar] [CrossRef]
  14. Choi, S.; Liu, S.Q.; Mattila, A.S. “How may I help you?” Says a robot: Examining language styles in the service encounter. Int. J. Hosp. Manag. 2019, 82, 32–38. [Google Scholar] [CrossRef]
  15. Hendriks, B.; van Meurs, F.; Korzilius, H.; le Pair, R.; le Blanc-Damen, S. Style congruency and persuasion: A cross-cultural study into the influence of differences in style dimensions on the persuasiveness of business newsletters in Great Britain and the Netherlands. IEEE Trans. Prof. Commun. 2012, 55, 122–141. [Google Scholar] [CrossRef]
  16. Mulac, A.; Bradac, J.J.; Gibbons, P. Empirical support for the gender-as-culture hypothesis: An intercultural analysis of male/female language differences. Hum. Commun. Res. 2001, 27, 121–152. [Google Scholar] [CrossRef]
  17. Lutz, B.; Pröllochs, N.; Neumann, D. Are longer reviews always more helpful? Disentangling the interplay between review length and line of argumentation. J. Bus. Res. 2022, 144, 888–901. [Google Scholar] [CrossRef]
  18. Davis, S.W.; Horváth, C.; Gretry, A.; Belei, N. Say what? How the interplay of tweet readability and brand hedonism affects consumer engagement. J. Bus. Res. 2019, 100, 150–164. [Google Scholar] [CrossRef]
  19. Kim, J.; Giroux, M.; Lee, J.C. When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychol. Mark. 2021, 38, 1140–1155. [Google Scholar] [CrossRef]
  20. Yu, X.; Li, Y.; Huang, H.; Bogicevic, V. How recommendation conciseness shapes customers’ attitudes toward the AI chatbot. Int. J. Contemp. Hosp. Manag. 2025, 37, 3559–3577. [Google Scholar] [CrossRef]
  21. Berger, J.; Packard, G. Wisdom from words: The psychology of consumer language. Consum. Psychol. Rev. 2023, 6, 3–16. [Google Scholar] [CrossRef]
  22. Fang, D.; Maglio, S.J. Time perspective and helpfulness: Are communicators more persuasive in the past, present, or future tense? J. Exp. Soc. Psychol. 2024, 110, 104544. [Google Scholar] [CrossRef]
  23. Parhankangas, A.; Renko, M. Linguistic style and crowdfunding success among social and commercial entrepreneurs. J. Bus. Ventur. 2017, 32, 215–236. [Google Scholar] [CrossRef]
  24. Gretry, A.; Horváth, C.; Belei, N.; van Riel, A.C. “Don’t pretend to be my friend!” When an informal brand communication style backfires on social media. J. Bus. Res. 2017, 74, 77–89. [Google Scholar] [CrossRef]
  25. Packard, G.; Moore, S.G.; McFerran, B. (I’m) happy to help (you): The impact of personal pronoun use in customer–firm interactions. J. Mark. Res. 2018, 55, 541–555. [Google Scholar] [CrossRef]
  26. Stephan, E.; Liberman, N.; Trope, Y. Politeness and psychological distance: A construal level perspective. J. Pers. Soc. Psychol. 2010, 98, 268–280. [Google Scholar] [CrossRef]
  27. Nepomuceno, M.V.; Laroche, M.; Richard, M.O. How to reduce perceived risk when buying online: The interactions between intangibility, product knowledge, brand familiarity, privacy and security concerns. J. Retail. Consum. Serv. 2014, 21, 619–629. [Google Scholar] [CrossRef]
  28. Salancik, G.R.; Pfeffer, J. A social information processing approach to job attitudes and task design. Adm. Sci. Q. 1978, 23, 224–253. [Google Scholar] [CrossRef]
  29. Spence, M. Signaling in retrospect and the informational structure of markets. Am. Econ. Rev. 2002, 92, 434–459. [Google Scholar] [CrossRef]
  30. Boateng, S.L. Online relationship marketing and customer loyalty: A signaling theory perspective. Int. J. Bank Mark. 2019, 37, 226–240. [Google Scholar] [CrossRef]
  31. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave new world: Service robots in the frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef]
  32. Chauhan, R.; Mehra, P. Abstract or concrete language style? How chatbots of online travel agencies should apologise to customers. Asia Pac. J. Tour. Res. 2025, 30, 285–299. [Google Scholar] [CrossRef]
  33. Park, J.; Yoo, J.W.; Cho, Y.; Park, H. Examining the impact of service robot communication styles on customer intimacy following service failure. J. Retail. Consum. Serv. 2023, 75, 103511. [Google Scholar] [CrossRef]
  34. Wang, S.; Yan, Q.; Wang, L. Task-oriented vs. social-oriented: Chatbot communication styles in electronic commerce service recovery. Electron. Commer. Res. 2025, 25, 1793–1825. [Google Scholar] [CrossRef]
  35. Hu, Q.; Pan, Z. Is cute AI more forgivable? The impact of informal language styles and relationship norms of conversational agents on service recovery. Electron. Commer. Res. Appl. 2024, 65, 101398. [Google Scholar] [CrossRef]
  36. Lu, Z.; Min, Q.; Jiang, L.; Chen, Q. The effect of the anthropomorphic design of chatbots on customer switching intention when the chatbot service fails: An expectation perspective. Int. J. Inf. Manag. 2024, 76, 102767. [Google Scholar] [CrossRef]
  37. Yan, J.; Luo, B.; Zhang, K. Cute or competent? Contextual dynamics of consumer acceptance toward service robots. J. Hosp. Mark. Manag. 2025, 34, 681–703. [Google Scholar] [CrossRef]
  38. Huang, Y.; Gursoy, D. Customers’ online service encounter satisfaction with chatbots: Interaction effects of language style and decision-making journey stage. Int. J. Contemp. Hosp. Manag. 2024, 36, 4074–4091. [Google Scholar] [CrossRef]
  39. Zhu, J.; Jiang, Y.; Wang, X.; Huang, S. Social-or task-oriented: How does social crowding shape consumers’ preferences for chatbot conversational styles? J. Res. Interact. Mark. 2023, 17, 641–662. [Google Scholar] [CrossRef]
  40. Chen, J.; Li, M.; Ham, J. Different dimensions of anthropomorphic design cues: How visual appearance and conversational style influence users’ information disclosure tendency towards chatbots. Int. J. Hum. Comput. Stud. 2024, 190, 103320. [Google Scholar] [CrossRef]
  41. Baek, T.H.; Kim, H.J.; Kim, J. AI-generated recommendations: Roles of language style, perceived AI human-likeness, and recommendation agent. Int. J. Hosp. Manag. 2025, 126, 104106. [Google Scholar] [CrossRef]
  42. Chen, J.; Ke, W.; Wu, Y.; Zhang, J. Do cute language style chatbots make consumers more unethical? Int. J. Inf. Manag. 2025, 85, 102941. [Google Scholar] [CrossRef]
  43. Dean, D.L.; Hender, J.; Rodgers, T.; Santanen, E. Identifying good ideas: Constructs and scales for idea evaluation. J. Assoc. Inf. Syst. 2006, 7, 646–699. [Google Scholar]
  44. Guilford, J.P. Creativity: Yesterday, today and tomorrow. J. Creat. Behav. 1967, 1, 3–14. [Google Scholar] [CrossRef]
  45. Sevier, D.C.; Jablokow, K.; McKilligan, S.; Daly, S.R.; Baker, I.N.; Silk, E.M. Towards the development of an elaboration metric for concept sketches. In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Cleveland, OH, USA, 6–9 August 2017. [Google Scholar]
  46. Jensen, M.L.; Averbeck, J.M.; Zhang, Z.; Wright, K.B. Credibility of anonymous online product reviews: A language expectancy perspective. J. Manag. Inf. Syst. 2013, 30, 293–324. [Google Scholar] [CrossRef]
  47. Fiske, S.T.; Cuddy, A.J.C.; Glick, P. Universal dimensions of social cognition: Warmth and competence. Trends Cogn. Sci. 2007, 11, 77–83. [Google Scholar] [CrossRef] [PubMed]
  48. Fiske, S.T.; Cuddy, A.J.C.; Glick, P.; Xu, J. A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. In Social Cognition; Routledge: London, UK, 2018; pp. 162–214. [Google Scholar]
  49. Yang, L.W.; Aggarwal, P.; McGill, A.L. The 3 C’s of anthropomorphism: Connection, comprehension, and competition. Consum. Psychol. Rev. 2020, 3, 3–19. [Google Scholar] [CrossRef]
  50. Cuddy, A.J.C.; Glick, P.; Beninger, A. The dynamics of warmth and competence judgments, and their outcomes in organizations. Res. Organ. Behav. 2011, 31, 73–98. [Google Scholar] [CrossRef]
  51. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
  52. Burgoon, M.; Miller, G.R. An expectancy interpretation of language and persuasion. In Recent Advances in Language, Communication, and Social Psychology; Giles, H., St. Clair, R., Eds.; Academic Press: Orlando, FL, USA, 1985; pp. 199–229. [Google Scholar]
  53. Brown, P. Politeness: Some Universals in Language Usage; Cambridge University Press: Cambridge, UK, 1987. [Google Scholar]
  54. Novak, L.; Malinakova, K.; Mikoska, P.; van Dijk, J.P.; Tavel, P. Neural correlates of compassion—An integrative systematic review. Int. J. Psychophysiol. 2022, 172, 46–59. [Google Scholar] [CrossRef]
  55. Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers’ service experiences. J. Serv. Res. 2017, 20, 43–58. [Google Scholar]
  56. Yu, S.; Zhao, L. Emojifying chatbot interactions: An exploration of emoji utilization in human–chatbot communications. Telemat. Inform. 2024, 86, 102071. [Google Scholar] [CrossRef]
  57. Clark, M.S.; Mills, J. The difference between communal and exchange relationships: What it is and is not. Pers. Soc. Psychol. Bull. 1993, 19, 684–691. [Google Scholar] [CrossRef]
  58. Günürkün, P.; Haumann, T.; Mikolon, S. Disentangling the differential roles of warmth and competence judgments in customer–service provider relationships. J. Serv. Res. 2020, 23, 476–503. [Google Scholar] [CrossRef]
  59. Iacobucci, D.; Ostrom, A. Gender differences in the impact of core and relational aspects of services on the evaluation of service encounters. J. Consum. Psychol. 1993, 2, 257–286. [Google Scholar] [CrossRef]
  60. Li, X.; Chan, K.W.; Kim, S. Service with emoticons: How customers interpret employee use of emoticons in online service encounters. J. Consum. Res. 2019, 45, 973–987. [Google Scholar] [CrossRef]
  61. Han, E.; Yin, D.; Zhang, H. Bots with feelings: Should AI agents express positive emotion in customer service? Inf. Syst. Res. 2023, 34, 1296–1311. [Google Scholar] [CrossRef]
  62. Li, Z.; Tao, W.; Wu, L. Examining the joint impact of relationship norms and service failure severity on consumer responses. J. Public Relat. Res. 2020, 32, 76–91. [Google Scholar] [CrossRef]
  63. Solomon, M.R.; Surprenant, C.; Czepiel, J.A.; Gutman, E.G. A role theory perspective on dyadic interactions: The service encounter. J. Mark. 1985, 49, 99–111. [Google Scholar] [CrossRef]
  64. Chung, M.; Ko, E.; Joung, H.; Kim, S.J. Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 2020, 117, 587–595. [Google Scholar] [CrossRef]
  65. Lv, X.; Luo, J.; Liang, Y.; Li, C. Is cuteness irresistible? The impact of cuteness on customers’ intentions to use AI applications. Tour. Manag. 2022, 90, 104472. [Google Scholar] [CrossRef]
  66. Townsend, C.; Sood, S. Self-affirmation through the choice of highly aesthetic products. J. Consum. Res. 2012, 39, 415–428. [Google Scholar] [CrossRef]
  67. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach; Guilford Publications: New York, NY, USA, 2017. [Google Scholar]
  68. Mills, J.; Clark, M.S. Communal and exchange relationships: Controversies and research. In Theoretical Frameworks for Personal Relationships; Erber, R., Gilmour, R., Eds.; Psychology Press: London, UK, 2013; pp. 29–42. [Google Scholar]
  69. Chang, W.; Kim, K.K. Appropriate service robots in exchange and communal relationships. J. Bus. Res. 2022, 141, 462–474. [Google Scholar] [CrossRef]
Figure 1. Conceptual model.
Figure 2. The mediating effect of warmth.
Figure 3. The moderating effect of relationship norm orientation.
Table 1. Research on chatbots’ language style.
| Context | Language Style | Outcome | Mediator(s) | Moderator(s) | Source |
|---|---|---|---|---|---|
| Service recovery | Informal vs. formal | Negative word of mouth | Intimacy and anger | Failure severity | Park et al. [33] |
| | Concrete vs. abstract | Customer forgiveness | Perceived firm sincerity and empathy | Service failure severity | Chauhan and Mehra [32] |
| | Social-oriented vs. task-oriented | Service recovery satisfaction | Cognition-based trust/affect-based trust | Task complexity | Wand et al. [34] |
| | Literal vs. whimsical vs. kindchenschema cuteness | Customer forgiveness | Warmth and competence perception | Relationship norm orientation | Hu and Pan [35] |
| | Social-oriented vs. task-oriented | Switching intention | Service recovery expectations | Task criticality | Lu et al. [36] |
| | Cute vs. competent | Willingness to use the service robot | Perceived entertainment and perceived competence | Service context/relationship orientation | Yan et al. [37] |
| Service response | Concrete vs. abstract | Customer satisfaction | Empathic accuracy | Cuteness | Zhu et al. [11] |
| | Informal vs. formal | Intention to continue usage and brand attitude | Parasocial interaction | Brand affiliation | Li and Wang [10] |
| | Concrete vs. abstract | Customer satisfaction | Emotional support and informational support | Decision-making-journey stage | Huang and Gursoy [38] |
| | Social-oriented vs. task-oriented | Consumer acceptance | Warmth and competence | Social crowding environment | Zhu et al. [39] |
| | Human-like vs. mechanical | Information disclosure tendency | Perceived security | Visual appearance | Chen et al. [40] |
| | Figurative vs. literal | Acceptance of AI-generated recommendations | Imagery vividness | Type of agent and perceived AI human-likeness | Baek et al. [41] |
| | Non-cute vs. cute | Unethical behavior | Moral self-concept and emotional arousal | Profile picture type | Chen et al. [42] |
| | Concise vs. verbose | Attitudes toward the AI chatbot | Decision comfort | Consumption context/social distance | Yu et al. [20] |
| | Succinct vs. elaborate | Customer satisfaction | Warmth | Relationship norm orientation | Our paper |
Table 2. Participant demographics in Studies 1–3.
| | Study 1 Freq. | Study 1 % | Study 2 Freq. | Study 2 % | Study 3 Freq. | Study 3 % |
|---|---|---|---|---|---|---|
| Gender | | | | | | |
| Male | 70 | 30.4 | 35 | 17.5 | 110 | 32.0 |
| Female | 160 | 69.6 | 165 | 82.5 | 234 | 68.0 |
| Age | | | | | | |
| 18–25 | 75 | 32.61 | 199 | 99.5 | 104 | 30.23 |
| 26–35 | 107 | 46.52 | 1 | 0.5 | 165 | 47.97 |
| 36–45 | 36 | 15.65 | 0 | 0 | 64 | 18.60 |
| 46–60 | 11 | 4.78 | 0 | 0 | 11 | 3.20 |
| ≥61 | 1 | 0.43 | 0 | 0 | 0 | 0 |
| Education | | | | | | |
| Middle school or below | 4 | 1.7 | 17 | 8.5 | 12 | 3.5 |
| Junior college | 12 | 5.2 | 1 | 0.5 | 31 | 9.0 |
| University | 179 | 77.8 | 161 | 80.5 | 243 | 70.6 |
| Postgraduate | 35 | 15.2 | 21 | 10.5 | 58 | 16.8 |
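As a quick arithmetic check on Table 2, each study's sample size can be recovered by summing the frequencies within a category, and the reported percentages then follow directly. A minimal sketch (the dictionary below simply restates the gender counts from the table):

```python
# Gender frequencies from Table 2: (male, female) for each study.
gender = {"Study 1": (70, 160), "Study 2": (35, 165), "Study 3": (110, 234)}

for study, (male, female) in gender.items():
    n = male + female                    # sample size recovered from the counts
    pct_male = round(100 * male / n, 1)  # matches the reported percentage
    print(f"{study}: n = {n}, male = {pct_male}%")
# → Study 1: n = 230, male = 30.4%
# → Study 2: n = 200, male = 17.5%
# → Study 3: n = 344, male = 32.0%
```

The recovered sample sizes (230, 200, 344) are also consistent with the age and education rows, e.g., 179/230 ≈ 77.8% university-educated in Study 1.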
Table 3. Regression output for floodlight analysis.
| | Coefficient | SE | t | p | LLCI | ULCI |
|---|---|---|---|---|---|---|
| Constant | 6.57 | 1.14 | 5.78 | 0.000 | 4.34 | 8.81 |
| Chatbot language style | −1.98 | 0.76 | −2.60 | 0.0097 | −3.4693 | −0.4814 |
| Relationship norm orientation | −0.24 | 0.26 | −0.94 | 0.3458 | −0.7459 | 0.2621 |
| Interaction effect | 0.49 | 0.17 | 2.85 | 0.0047 | 0.1504 | 0.8216 |

Model summary: R = 0.33, R² = 0.11, F(3, 340) = 14.08, p = 0.000.
Note: the dependent variable is service satisfaction.
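The floodlight logic behind Table 3 can be reproduced from the reported coefficients: the conditional effect of language style at a given level w of relationship norm orientation is b1 + b3·w, and the point estimate of the crossover, where that simple slope changes sign, is w = −b1/b3. A minimal sketch using the coefficients above (the exact Johnson–Neyman significance boundaries would additionally require the coefficient covariance matrix, which the table does not report):

```python
b1 = -1.98  # main effect of chatbot language style (Table 3)
b3 = 0.49   # style x relationship-norm-orientation interaction (Table 3)

def simple_slope(w: float) -> float:
    """Conditional effect of language style at norm orientation level w."""
    return b1 + b3 * w

# Point estimate of where the effect of language style flips sign.
crossover = -b1 / b3
print(f"crossover at w = {crossover:.2f}")       # → crossover at w = 4.04
print(f"slope at w = 2: {simple_slope(2):.2f}")  # negative below the crossover
print(f"slope at w = 6: {simple_slope(6):.2f}")  # positive above the crossover
```

This mirrors the moderation pattern in the abstract: for lower (exchange-oriented) norm scores the slope is negative, favoring succinct language, while for higher (communal-oriented) scores it turns positive, favoring elaborate language.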
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Fan, Y.; Yue, X.; Zhang, X.; Zhang, L. Elaborate or Succinct? The Impact of AI Chatbots’ Language Style on Customers’ Satisfaction in Online Service. J. Theor. Appl. Electron. Commer. Res. 2026, 21, 51. https://doi.org/10.3390/jtaer21020051

