Article

When AI Chatbots Ask for Donations: The Construal Level Contingency of AI Persuasion Effectiveness in Charity Human–Chatbot Interaction

Business School, University of International Business and Economics, Beijing 100029, China
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 341; https://doi.org/10.3390/jtaer20040341
Submission received: 12 September 2025 / Revised: 2 November 2025 / Accepted: 6 November 2025 / Published: 3 December 2025

Abstract

As AI chatbots are increasingly used in digital fundraising, it remains unclear which communication strategies are more effective in enhancing consumer trust and donation behavior. Drawing on construal level theory and adopting a human-AI interaction perspective, this research examines how message framing in AI-mediated persuasive communication shapes trust and donation willingness. Across four studies, we find that when AI chatbots employ high-level construal (abstract) message framing, consumers perceive the information as less credible compared to when the same message is delivered by a human agent. This reduced message credibility weakens trust in the charitable organization through a trust transfer mechanism, ultimately lowering donation intention. Conversely, low-level construal (concrete) framing enhances both trust and donation willingness. Moreover, the negative impact of abstract message framing by AI chatbots is significantly attenuated when the chatbot features anthropomorphic visual cues, which increase perceived credibility and restore trust and donation willingness. These findings reveal potential risks in deploying AI chatbots for interactive fundraising marketing and offer practical insights for nonprofit organizations seeking to leverage AI in donor engagement.

1. Introduction

As interactive websites increasingly integrate artificial intelligence (AI), AI-driven chatbots have become an essential interface for enabling personalized and dialogic interactions in digital environments. Within this dynamic context, industry projections estimate that the global conversational AI market will reach USD 1.158 billion by 2030, underscoring the growing deployment of AI agents across both commercial and non-profit organizations [1].
In practice, both commercial enterprises and nonprofit organizations have begun deploying AI chatbots to enhance interactive engagement and communication efficiency. Charitable organizations, in particular, have adopted AI chatbots to engage potential donors, respond to inquiries, and optimize fundraising operations [2,3,4]. These applications underscore the growing relevance of AI chatbots as interactive mediators in nonprofit marketing, where they shape how donors experience digital interactions and perceive organizational credibility [5,6,7,8].
Despite this progress, it remains unclear how the use of AI chatbots, compared with traditional human service agents, influences consumers’ experiences and donation behaviors on interactive online fundraising websites. On the one hand, interacting with AI rather than human representatives may elicit a more utilitarian processing mindset [9], reducing consumers’ perceived autonomy and subsequently lowering donation intentions [10]. On the other hand, some studies have shown that AI agents are perceived as more objective and fairer than human agents, which can enhance consumer trust and foster prosocial behaviors [11,12,13].
Given this background, the extant literature reveals two major limitations. The first gap concerns the lack of understanding of how interaction design, particularly message framing, affects consumers’ perceptions and experiences in human–AI communication. Specifically, most research on improving AI persuasive power has focused on agent-related features of AI (e.g., transparency, social vs. task-oriented communication style), contextual factors (e.g., hedonic vs. utilitarian contexts, prosocial or moral contexts, symbolic consumption context) and individual differences (e.g., consumers’ AI acceptance, perceived identity threat, or need for uniqueness) [14]. However, little is known about how variations in message framing, specifically whether the message emphasizes concrete implementation details or abstract values, influence consumers’ experiential responses and interactive engagement with AI agents.
The second gap lies in the insufficient attention to how human–AI chatbot interaction influences consumer trust in charitable organizations. Unlike commercial enterprises, charitable organizations rely heavily on voluntary public support driven by moral motivation, with consumer trust constituting a core foundation for the survival of organizations. Nevertheless, most research on AI communication has been conducted in profit-oriented contexts, focusing on purchase intentions or service satisfaction [15,16,17]. As a result, little is known about how AI-mediated dialog can foster or undermine trust in charitable organizations.
Failing to address these two gaps is problematic. In digital fundraising environments, consumers rely heavily on interaction experience to infer the credibility and integrity of charitable organizations. When message framing fails to match the perceived nature of the communication agent, the persuasive effectiveness of AI chatbots may diminish. Moreover, because consumers often exhibit algorithm aversion—expressing skepticism toward the warmth, autonomy, and transparency of AI [18,19]—designing interaction strategies that mitigate such aversion is vital. Without a nuanced understanding of how message framing and chatbot characteristics jointly shape trust, organizations risk undermining both donor engagement and fundraising outcomes in increasingly AI-mediated contexts [20,21].
Against this backdrop, the present study aims to examine how AI chatbots can effectively interact with consumers in online non-profit fundraising contexts to enhance trust in charitable organizations and increase donation intentions. Drawing upon Construal Level Theory [22], this paper proposes that in online non-profit fundraising, persuasion by AI chatbots is more effective when using low-level construal (i.e., concrete message framing) rather than high-level, abstract message framing. Figure 1 illustrates the theoretical model of this research. Four studies were conducted to examine the proposed hypotheses. The results show that matching AI chatbots with concrete message framing significantly enhanced consumer trust in charitable organizations and increased donation intentions, compared to matching chatbots with abstract message framing. In contrast, for human agents, the message framing type (concrete vs. abstract) did not significantly affect trust or donation intentions. This is because people generally believe that AI operates based on pre-programmed algorithms and lacks the capacity to understand abstract values and goals, which in turn undermines the credibility of the message when it is presented in association with abstract content. However, when the AI agent is endowed with anthropomorphic design features, consumers are more likely to perceive the abstract-framed message as credible, which in turn strengthens their trust in the charitable organization employing AI and increases their willingness to donate.
This study makes three major contributions. First, extending prior research that emphasizes consumers’ aversion to AI technologies [23], this study demonstrates that aligning AI chatbots with a low-level construal communication style can effectively foster prosocial behaviors by positioning the AI as a more credible source. Second, it advances the application of construal level theory within the domain of interactive marketing by showing that the alignment between the AI agent and low-construal message framing enhances consumer trust in non-profit organizations. Third, this research contributes to the growing literature on AI anthropomorphism and human–AI interaction [24]. While consumers generally view AI as less appropriate for conveying abstract messages involving values and vision, this study reveals that endowing AI chatbots with anthropomorphic features can mitigate the negative effects of abstract message framing. The remainder of this paper reviews the relevant literature, develops hypotheses, elaborates the research design and methodology, reports the findings, and concludes by discussing the theoretical and practical implications, limitations, and avenues for future research.

2. Theoretical Framework and Hypotheses

2.1. Human–AI Chatbot Interaction and Marketing Persuasion

Powered by artificial intelligence, chatbots have become integral to interactive marketing by enabling natural language dialogs that shape consumer attitudes, engagement, and behavioral responses. Through real-time interaction, chatbots can personalize communication, provide adaptive feedback, and enhance experiential value across multiple digital touchpoints [8]. They have been widely adopted across various contexts, including customer service and product or service recommendation, as tools for enriching consumer experiences [25,26]. In marketing communication, these systems are increasingly regarded not merely as technical interfaces but as socially capable agents that engage consumers in dialogic persuasion [6,7]. This shift reflects the Computers Are Social Actors (CASA) paradigm, which posits that individuals apply social rules and heuristics when interacting with computers or AI systems, responding to them as they would to human communication agents [27].
Building on CASA, researchers have further expanded the theoretical foundation for understanding human–AI interaction, arguing that AI systems are no longer perceived merely as passive tools but as autonomous social agents endowed with perceived agency, capable of initiating, shaping, and influencing human attitudes and behaviors [5]. Specifically, consumers evaluate AI agents along the two fundamental dimensions of warmth and competence that also structure perceptions of human communication agents [28,29], and prevalent AI stereotypes depict AI as highly competent but emotionally cold. These stereotypes have direct implications for persuasion: AI agents perceived as empathetic, warm, and socially responsive are more likely to elicit engagement and compliance, while those seen as purely utilitarian or mechanistic may provoke psychological distance and algorithm aversion [5,19]. Therefore, the persuasiveness of AI depends not only on the accuracy of information transmission but also on consumers’ stereotypes toward it, as well as the social and emotional cues conveyed in human–AI interaction.
Recent research highlights the importance of contextual fit between AI agent characteristics and message design. Consumers perceive AI agents as more suitable for tasks that emphasize objectivity and ethical reasoning—for example, recommending products framed around health or environmental benefits—because AI is viewed as fairer and less biased than human persuaders [12]. Conversely, for experience-based or emotionally rich products, human agents tend to be more persuasive due to their capacity for affective resonance [19,30]. Thus, the effectiveness of AI persuasion depends on whether its perceived attributes align with the nature of the communication task.
In charitable marketing, AI chatbots are increasingly used in promotional activities and online fundraising initiatives to disseminate information, answer donor queries, and encourage donations. However, research findings remain inconsistent. Empirical evidence suggests that employing AI chatbots may negatively affect consumer donation intentions, because consumers interacting with AI systems tend to exhibit more utilitarian orientations than those interacting with human agents [9]. Similarly, AI-recommended charitable initiatives may induce feelings of restricted choice autonomy among consumers [10], thereby diminishing their willingness to donate. Conversely, other findings indicate that consumers perceive AI as more equitable and objective, enhancing fairness perceptions and prosocial intentions [12]. This inconsistency reflects both the positive and negative aspects of AI-related stereotypes, underscoring the necessity of examining the boundary conditions under which AI persuasion operates effectively in non-commercial, morally driven domains such as charitable giving.
Unlike commercial transactions [31], charitable donations lack tangible product attributes and measurable outcomes. Donors rely on the perceived trustworthiness of the organization and the quality of their interaction experience to guide decisions. When interacting with AI chatbots in online charity contexts, consumers have limited external cues to assess credibility, making the perception of online interaction a critical determinant of persuasive outcomes. Therefore, this study examines how the alignment between the type of communication agent (AI vs. human) and the message framing strategy can enhance both donation intentions and trust in charitable organizations.

2.2. Charitable Donation of Consumers

Charitable donation refers to the voluntary provision of resources by individuals or organizations to non-kin others or public welfare causes, representing a specific manifestation of prosocial behavior in the philanthropic domain [32]. Because donors engage in a value exchange with charitable organizations, they can be viewed as consumers in this context [33]. While charitable contributions derive from multiple sources (individuals, foundations, bequests, and corporations), the current research specifically examines individual-level consumer behavior in charitable contexts. Therefore, “consumer” in this research specifically refers to individuals who voluntarily donate funds to benefit others (non-immediate family) or organizations [34]. Charitable behavior thus embodies a triadic interaction among donors, beneficiaries, and fundraising intermediaries, wherein the interactional quality between donors and organizational representatives (e.g., staff or AI-based chatbots) critically shapes trust and decision-making [32]. As a pivotal behavior in philanthropy, charitable donations fundamentally determine the capacity of charitable organizations to secure funding and execute their initiatives.
A substantial body of literature has explored the antecedents of consumers’ prosocial donation behavior, which can be categorized into personal, situational, and organizational factors. At the personal level, demographic characteristics such as gender, age, and income, together with personality traits such as agreeableness, empathy, moral identity, and social value orientation, are found to determine the likelihood of giving, with empathy and moral identity identified as particularly potent emotional and cognitive motivators [35,36,37,38,39]. At the situational level, ingroup identification, perceived interpersonal similarity, and social learning from family role models can strengthen prosocial behavior [40,41]. Moreover, organizational characteristics are crucial in shaping donation decisions. Factors such as institutional transparency, organizational reputation, perceived efficacy, and the design of fundraising appeals—including group entitativity, the identified victim effect, and message concreteness—substantially affect donors’ intentions by conveying credibility, legitimacy, and moral significance [34,42,43].
While these factors have been widely validated in traditional philanthropic settings, the rise in digital fundraising platforms has reshaped how donors perceive credibility, transparency, and trust in charitable organizations. Online fundraising introduces new interactive touchpoints where mediated communication, rather than face-to-face contact, becomes the primary basis for trust formation. Within these interactive environments, AI-driven chatbots now serve as communication interfaces that facilitate personalized and real-time donor engagement. Meanwhile, how message features should be designed for AI chatbot-mediated donation persuasion in online fundraising has received insufficient scholarly attention. Accordingly, this study investigates the differential impacts of message framing types on donation behavior within human–AI conversational interfaces in online fundraising campaigns.

2.3. Consumer Trust in the Age of AI

Trust serves as a foundational construct in interactive marketing, shaping individuals’ willingness to engage in online exchanges and co-created experiences. Within digital fundraising contexts, it plays a particularly critical role in reducing perceived risk and uncertainty, as donors must decide whether to contribute to intangible outcomes and unseen beneficiaries. Gefen emphasized that trust is a key factor in reducing uncertainty and perceived risk in digital transactions [44]. It is generally defined as a psychological state of willingness to accept vulnerability based on positive expectations of another party’s motives or behaviors [45]. Consumer trust reflects individuals’ belief in the dependability and ethical standards of trust targets, including commercial brands, organizations or technological solutions [46]. The object of trust can be categorized into three dimensions: interpersonal, institutional, and dispositional [47]. Interpersonal trust reflects confidence in specific others, institutional trust relates to confidence in systems or organizations, and dispositional trust represents an individual’s general tendency to trust across contexts.
As artificial intelligence technologies increasingly mediate consumer–brand interactions, scholars have recognized that trust also emerges within human–AI interactive exchanges [48], which emphasizes the willingness to be influenced and to take action, without restricting the concept to human relationships [45]. This makes it applicable to technological trust, including trust in artificial intelligence-driven virtual influencers and chatbots [49]. Prior literature suggests that consumers no longer perceive AI merely as a tool [5] but evaluate it as a “social actor,” assessing its fairness, empathy, and transparency, which collectively shape their trust responses. When AI is perceived as overly autonomous or opaque, it can trigger algorithm aversion, thereby diminishing consumers’ willingness to rely on its recommendations.
Accordingly, online trust differs from offline trust because consumers interact with websites rather than physical stores. The relatively higher outcome invisibility of donations exposes contributors to greater vulnerability [50]. Consequently, traditional trust-building mechanisms are difficult to replicate in digital environments [51], posing new challenges for charitable organizations seeking to establish credibility and reliability online. Specifically, this means that in digital fundraising, donor trust should include trust in the charitable organization itself, trust in the AI agent that mediates the communication, and trust in the charitable message being delivered. In other words, faced with high information asymmetry, algorithmic opacity, and diffused moral responsibility characteristic of the digital fundraising environment, online donors often rely on system cues, platform design, communication methods, and the tone of messages to infer the trustworthiness of both the technology and the organization behind it.
Drawing on trust transfer theory, when consumers perceive consistent and frequent interaction between two related entities, trust in a familiar target may transfer to an unfamiliar one [52]. Recent research in interactive marketing supports this logic, showing that consumers’ trust in an influencer account can transfer to the posts it generates [15]. Applying this theoretical notion to human–AI interaction in online fundraising, the AI chatbot can be regarded as the familiar target, while the credibility of the message delivered by the chatbot serves as a perceptual conduit for trust transfer. In this process, consumers’ confidence in the accuracy and authenticity of AI-generated messages can shape their subsequent trust in the charitable organization behind the communication. Unfortunately, extant literature has largely focused on technological trust in AI systems, while overlooking this potential trust transfer mechanism from AI-mediated information to nonprofit organizations. Therefore, the second focus of this research is to examine how different types of message framing influence consumers’ perceived message credibility during human–AI chatbot interaction, and how this perception subsequently affects their trust and donation intention toward charitable organizations.
To address this question, the CASA framework suggests that users often employ human-like heuristics when interacting with AI chatbots, attributing traits such as competence and objectivity to them [5]. Therefore, understanding consumers’ stereotypes about AI and strategically aligning message framing accordingly can not only mitigate their potential aversion toward AI, but also leverage their cognitive tendencies to enhance trust in charitable organizations. From this perspective, construal level theory further provides valuable insight by proposing that communication and persuasion are most effective when the construal level of the message source aligns with that of the message content, laying a foundation for analyzing how message framing and stereotypes toward AI chatbots jointly shape donor trust.

2.4. Message Framing and AI Persuasion in Online Fundraising

Construal level theory (CLT) elucidates how psychological distance influences individuals’ level of mental construal or interpretation of events [53,54]. Within interactive marketing research, CLT provides a valuable lens for understanding how the level of message abstraction influences consumers’ engagement and persuasion outcomes across digital touchpoints [22]. Information in the message refers to persuasive communication designed to influence attitudes, intentions, and ultimately behaviors, such as health promotion messages. An information topic denotes the central argument or belief that the message conveys [55]. Specifically, message topics and designs related to construal level theory can be categorized into two dimensions: high-level and low-level construal. The former involves “why”- or value-oriented themes, distant temporal/spatial frames, and non-narrative formats, whereas the latter encompasses “how”-oriented themes, proximal temporal/spatial frames, and narrative formats [56].
In the context of charitable advertising and marketing, fundraisers may employ either high-level (abstract) or low-level (concrete) message framing to persuade consumers [57]. High-level construal framing emphasizes the abstract goals, fundamental causes, and overarching mission of the charity; low-level construal framing focuses on specific implementation details, including fund allocation, concrete action plans, and beneficiary support mechanisms [58]. From an interactive marketing perspective, the key distinction between these two framing strategies lies in the extent to which the message conveyed emphasizes higher-order goals and abstract values—features that are largely absent in concrete framing. Behaviors associated with higher-order goals, such as intentions and values, correspond to higher construal levels because such goals tend to be future-oriented, not immediately attainable, concerned with broad social groups rather than specific individuals, and characterized by greater uncertainty in realization [22]. Consequently, high construal level information is typically associated with greater temporal, spatial, social, and hypothetical distance. In contrast, concrete framing involves psychologically proximal details and thus corresponds to a lower construal level.
Prior research indicates that AI agents’ persuasive effectiveness varies depending on product type, communication style, and other contextual factors. Consumers generally hold the lay belief about AI that it follows preset programming and lacks subjective perception or complex abstract thinking [59]. AI behavior is thus viewed as not driven by subjective goals or intentions [60]. This lay belief leads consumers to perceive AI agents as entities operating at a low construal level—more adept at handling concrete, low-level content—whereas humans are perceived as agents capable of high-level construal [22]. Therefore, in terms of message framing, consumers may implicitly consider AI chatbots more suitable for delivering concrete, low-level information.
Previous evidence shows that optimal matching between conversation agent type and message framing reduces cognitive load in information processing and enhances communication and persuasion outcomes [61]. For example, when the information source and message features are congruent, recipients perceive the appeal and persuasive message as more reasonable and credible [62]. Integrating the perspective of CLT, it can be inferred that when the perceived construal level of an AI chatbot matches the construal level of the message framing, consumers are likely to experience greater consistency and rationality, which in turn enhances perceived message credibility. Accordingly, this study predicts that, compared to other scenarios, consumers will perceive higher message credibility when the use of AI chatbots is congruent with concrete message framing.

2.5. Perceived Message Credibility, Consumer Trust and Donation Intention

Message credibility refers to an individual’s subjective perception of the truthfulness and reliability of received information [63]. When individuals judge the credibility of information, they primarily consider three aspects: (1) who the sender of the information is (source credibility); (2) through which channel the information is presented (media credibility); and (3) how the information itself is constructed (message credibility) [64]. In the domain of marketing communication, enhancing message credibility is regarded as a key strategy to improve persuasiveness and communication effectiveness.
Credibility is multidimensional and relates to perceived competence, character, composure, dynamism, and sociability [65]. Communication agents or messages can gain credibility by conveying expertise and authority (competence dimension) as well as being perceived as honest and trustworthy (character dimension) [66]. Message credibility significantly influences consumers’ judgment and decision-making processes [64]. For example, higher clarity of advertising information increases consumers’ credibility evaluations, which in turn strengthens their purchase intentions.
Previous studies have shown that message credibility may stem from a matching effect. Based on construal level theory, a previous study found that the congruence between consumers’ construal levels and regulatory orientation enhanced consumers’ attitudes toward different product attributes [67]. Similarly, Jäger et al. demonstrated that the alignment of low-level construal advertising with consumers’ familiar altruistic slogans increased perceived credibility of green product information [68].
In decision-making scenarios, especially when objective credibility cues are lacking, individuals tend to rely on auxiliary cues to make trust judgments. According to trust transfer theory, trust in an information source can be transferred to the target entity in the communication process [52]. In the context of online charity fundraising, where consumers have no direct contact with charity representatives (e.g., fundraisers or beneficiaries) and information on digital platforms is limited, a higher congruence between the AI chatbot and message framing enhances perceived message credibility. This in turn may increase consumers’ trust in the communication agent [69], thereby strengthening their trust in the charitable organization itself and their willingness to donate. Conversely, negative trust transfer may trigger algorithm aversion, which can undermine the persuasive effectiveness of AI chatbots and diminish consumer trust and donation intentions. Based on the above, this study proposes the following hypotheses:
H1. 
When the communication agent is an AI chatbot (vs. a human), using concrete (vs. abstract) message framing will more effectively enhance consumers’ (a) trust in the charitable organization and (b) donation intention.
H2. 
Perceived message credibility mediates the interactive effect of communication agent type (AI chatbot vs. human) and message framing type (concrete vs. abstract) on consumers’ (a) trust in the charitable organization and (b) donation intention.

2.6. The Moderation Effect of Anthropomorphism

Anthropomorphism of AI refers to the extent to which AI systems mimic human characteristics in appearance, behavior, and communication style [70]. For example, AI may possess human-like physical features, utilize natural language for interaction, and express human-like emotions and social strategies during communication [71]. In the present study, anthropomorphism specifically refers to appearance anthropomorphism. Although consumers generally perceive AI as a “mechanized” entity [72] with relatively low levels of interpretative ability, increasing the degree of appearance anthropomorphism in AI chatbots may mitigate the negative cognitive biases associated with such mechanistic perceptions.
Prior research demonstrates that higher levels of anthropomorphism enhance consumers’ perceptions of AI warmth and competence [73], thereby increasing trust and acceptance [74,75]. Because human-like appearance tends to prompt attribution of human traits to non-human machines, and people generally hold more favorable attitudes toward objects with human-like features, anthropomorphized AI is more likely to be ascribed emotions and intentionality [76]. As emotions and intentionality are closely linked to higher-order goals in behavior, AI agents with greater anthropomorphism may be perceived by consumers as capable of engaging in more abstract and sophisticated cognitive processes—i.e., possessing higher perceived interpretative levels. Therefore, when AI chatbots exhibit a high degree of appearance anthropomorphism, the persuasive messages they convey are more congruent with consumers’ perceptions of their interpretative abilities. In contrast, when AI chatbots that lack anthropomorphic cues deliver abstract, high-level information, a mismatch arises between the communication agent’s identity and the framing of the message, significantly undermining their persuasive effectiveness. Based on the above rationale, the following hypotheses are proposed:
H3. 
The level of anthropomorphism moderates the interactive effect of communication agent type (AI chatbot vs. human) and message framing type (concrete vs. abstract) on consumers’ (a) trust in charitable organizations and (b) donation intention via perceived message credibility. Specifically, only when the AI chatbot lacks anthropomorphic cues does the use of concrete rather than abstract message framing significantly enhance (a) consumer trust and (b) donation intention; for anthropomorphized AI chatbots and human agents, the effect of message framing type on (a) consumer trust and (b) donation intention is not significant.

3. Study 1

Study 1 aimed to test Hypotheses 1a and 1b, which proposed that when an AI chatbot (vs. a human) serves as the communication agent, using a concrete (vs. abstract) message framing will be more effective in enhancing consumers’ trust in a charitable organization and their willingness to donate. The study adopted a 2 (communication agent: AI vs. human) × 2 (message framing: abstract vs. concrete) between-subjects experimental design. Participants were randomly assigned to one of the four conditions.
To clarify research ethics considerations, this study was conducted as a non-interventional social science investigation that involved no physical, psychological, or clinical interventions and collected no sensitive or personally identifiable information. All questionnaire responses were obtained anonymously, securely stored, and used solely for academic purposes, with results reported only in aggregated form. In line with Recital 26 of the EU GDPR and the 2023 Ethical Review Measures of China’s National Health Commission, such anonymous, non-interventional research does not require ethics committee approval.
A priori power analysis using G*Power 3.1 indicated that a total sample of 274 participants would be required to detect a small-to-medium effect size (f = 0.17) with 80% statistical power at a 0.05 significance level. To ensure sufficient power, we recruited 294 participants through Credamo, an established Chinese online participant recruitment platform with a sample pool exceeding three million users. Credamo provides access to consumers from a wide range of regions and age groups in China, ensuring broad sample representativeness. Moreover, the platform can automatically assign participants to experimental conditions, which helps maintain the internal validity of experimental designs. Numerous published consumer behavior studies have used Credamo as a data source, and its data quality has been recognized by multiple peer-reviewed journals [77].
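For readers without access to G*Power, this sample-size calculation can be reproduced directly from the noncentral F distribution. The sketch below is illustrative rather than part of the original analysis; it assumes the test of interest is the interaction in a 2 × 2 between-subjects ANOVA (numerator df = 1, four cells), which matches the design described here.

```python
# Reproduces the a priori power calculation (Cohen's f = 0.17, alpha = 0.05,
# target power = .80) for a 2 x 2 interaction: numerator df = 1, error df = N - 4.
from scipy.stats import f as f_dist, ncf

def required_n(f_effect=0.17, alpha=0.05, target=0.80, n_groups=4, df_num=1):
    """Smallest total N at which noncentral-F power reaches the target."""
    for n in range(n_groups + 2, 10_000):
        df_den = n - n_groups              # error degrees of freedom
        nc = f_effect ** 2 * n             # noncentrality: lambda = f^2 * N
        f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
        power = 1 - ncf.cdf(f_crit, df_num, df_den, nc)
        if power >= target:
            return n, round(power, 3)
    raise ValueError("target power not reached")

print(required_n())   # ~ (274, 0.80), consistent with the reported G*Power result
```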
To exclude participants who did not read the materials carefully, we included an attention check that required them to correctly recall the name of the charity project described in the stimulus materials. In total, 277 participants passed the attention check and were included in the final analysis. Among the participants, 71.50% were female, with an average age of 31.15 years (SD = 6.872), and most participants held a bachelor’s degree or above (91.40%). For the randomization check, the F-test results indicated no significant differences in participants’ demographic variables across experimental conditions, suggesting that random assignment was successfully achieved. Detailed results are presented in Table A2 in Appendix B. The data used in this study are available in Supplementary Materials.

3.1. Procedure, Materials and Pretest

At the beginning of the experiment, participants were asked to imagine that they were browsing a charity donation website and came across a project called “Million Forest.” To learn more about the initiative, participants were instructed to open the customer service chat window on the webpage, where they interacted with an AI chatbot representing the charitable organization.
Specifically, under the abstract message framing condition, participants assigned to the AI communication agent condition saw a chatbot with a GPT-style icon and the name “CharityGPT” initiating the conversation. The message read:
“Hello, I am the [AI customer service] representative of this website. I’m pleased to introduce you to our fundraising project—the Million Forest Initiative! [Learn more] Every green forest in the desert begins with a heart willing to sow seeds. The Million Tree Planting Project is devoted to protecting and improving the ecological environment through afforestation. By joining the Million Tree Planting Project, you will have the opportunity to help restore ecosystems damaged by desertification and create a sustainable, greener planet for future generations. Your support will be a vital force in advancing the path toward green development. Let’s contribute a forest to the Earth of tomorrow! [Click here to donate]”
In the human communication agent condition, participants saw a human avatar and a typing indicator (“The other party is typing …”), indicating that they were conversing with a human staff member of the foundation. The message content remained the same across conditions.
In contrast, under the concrete message framing condition, participants in the AI communication agent condition also received a message from the CharityGPT chatbot, but with more specific and detailed content:
“Hello, I am the [AI customer service] representative of this website. I’m pleased to introduce you to our fundraising project—the Million Forest Initiative! [Learn more] The Inner Mongolia region has long suffered from desertification. Without proper intervention, desertification will continue to expand, encroaching on farmland and grasslands, severely impacting the environment and the livelihoods of nearby residents. This project plans to plant one million saplings in the Kubuqi Desert of Inner Mongolia. It will also engage local farmers and herders in sapling maintenance and cultivation, and dispatch two professional forestry workers to supervise and assess the growth of the trees. Take action now—join the project and plant a tree with us! [Click here to donate]”
In the human communication agent condition, the same message was presented with a human avatar and a typing prompt indicating interaction with a human representative. The experimental materials are provided in Appendix A.
To measure trust in the charitable organization, the study adopted the scale developed by Li et al. [78]. Example items included “I have no doubt that this charitable organization is trustworthy,” and “I feel that this charitable organization is reliable” (α = 0.834). Participants’ willingness to donate was measured using an adapted version of the participation intention scale from Grinstein and Kronrod [79]. Items included “I am willing to donate to this project through the organization,” and “I am likely to donate to this project through the organization” (1 = Not at all; 7 = Very much agree; α = 0.807). Finally, demographic information was collected at the end of the experiment. The full wording of the measurement items is provided in Table A1 in Appendix B.
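For reference, the Cronbach’s alpha values reported for these scales can be computed from raw item responses as below. This is a minimal sketch with hypothetical data, not the authors’ analysis script.

```python
# Minimal Cronbach's alpha: respondents in rows, scale items in columns.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items in the scale
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point responses to a three-item trust scale:
demo = np.array([[6, 7, 6], [5, 5, 6], [7, 7, 7], [4, 5, 4], [6, 6, 5]])
print(round(cronbach_alpha(demo), 3))
```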
According to CLT, information emphasizing purpose and values—that is, “why” themes—tends to be construed at a high level (i.e., abstract framing). In contrast, information emphasizing processes and means—that is, “how” themes—tends to be construed at a low level (i.e., concrete framing). Based on these theoretical distinctions and drawing on real-world charity donation campaigns, we developed the experimental materials described above. A pretest was conducted to verify that the materials successfully reflected different levels of abstraction. One hundred participants (59% female; Mage = 36.66, SD = 11.55; 84% held a bachelor’s degree or above) were recruited from the Credamo platform and randomly assigned to either the abstract or concrete message framing condition. They were asked to imagine viewing the “Million Forest” charity project and reading the message presented by the online customer service agent used in the main experiment. Participants in the abstract and concrete framing conditions viewed the same content as those in the main experiment’s human communication agent–abstract, and human communication agent–concrete conditions, respectively, except that the agent’s identity (AI vs. human) was not revealed. Afterwards, participants responded to an open-ended question in which they were asked to list their thoughts about the charity project.
Following Hamilton and Thompson (Study 1) and Zhang et al. (Study 1), participants’ open-ended responses were coded into “abstract” and “concrete” categories by two independent coders as a measure of the construal level of the message [80,81]. The coders were PhD students specializing in consumer behavior and organizational behavior, who were blind to the study’s hypotheses and trained in construal level theory. Specifically, a response was coded as 1 in the abstract category if it focused on why the project was conducted, the outcomes of tree planting, or the broader advantages and implications of the project (e.g., “The project helps stop sand from spreading and creates green jobs for local communities”, “I find this project very meaningful”). A response was coded as 1 in the concrete category if it referred to how the project could be implemented or described the process of tree planting (e.g., “Organize and participate in tree-planting events, support environmental research projects, and provide regular updates on the reforested areas”). Responses that contained only objective descriptions without evaluative or procedural content were assigned a value of 0 for both abstract and concrete coding. Interrater reliability was excellent (ICC = 0.962). Following previous research [81,82], any discrepancies were resolved through discussion guided by the Linguistic Category Model (LCM) [83], taking into account the abstractness of verbs and the overall sentence meaning.
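A two-coder intraclass correlation of this kind can be computed, for example, with the pingouin package. The sketch below uses hypothetical codes, and the long-format layout (one row per response–coder pair) is an assumption about how such coding data would be organized.

```python
# Two-coder interrater reliability in the long format expected by
# pingouin.intraclass_corr (hypothetical codes; the paper reports ICC = .962).
import pandas as pd
import pingouin as pg

codes = pd.DataFrame({
    "response": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],   # coded open-ended responses
    "coder":    ["A", "B"] * 5,                   # two independent coders
    "score":    [1, 1, 0, 0, 1, 1, 1, 0, 0, 0],   # 1 = concrete thought present
})
icc = pg.intraclass_corr(data=codes, targets="response",
                         raters="coder", ratings="score")
print(icc[["Type", "ICC"]])   # e.g., the ICC2 row for two-way random agreement
```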

3.2. Results

Following Zhang et al. [81], for the pretest, a chi-square test showed that participants in the concrete message framing condition generated more concrete thoughts about the charity project than those in the abstract framing condition (32/50 vs. 19/50), χ2(1, N = 100) = 6.763, p = 0.016. Conversely, participants in the abstract message framing condition generated more abstract thoughts than those in the concrete framing condition (29/50 vs. 11/50), χ2(1, N = 100) = 13.500, p < 0.001. These results indicate that the experimental materials effectively manipulated the construal level of the messages.
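These counts can be checked with a standard 2 × 2 chi-square test. In the sketch below, the complementary cell counts are inferred as the remainder of each condition (an assumption, since only the focal proportions are reported); the uncorrected statistic reproduces the reported χ² value.

```python
# 2 x 2 chi-square on the pretest coding counts; the "other" cells are
# inferred as the remainder of each 50-person condition.
from scipy.stats import chi2_contingency

concrete_thoughts = [[32, 50 - 32],   # concrete-framing condition
                     [19, 50 - 19]]   # abstract-framing condition
chi2, p, dof, _ = chi2_contingency(concrete_thoughts, correction=False)
print(round(chi2, 3), round(p, 4))    # ~6.762 uncorrected; applying the Yates
                                      # correction lowers chi2 and raises p
```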
To test the hypotheses, the study examined the interaction between communication agent type and message framing. In the analysis, agent type was coded as a binary independent variable (AI = 1, human = 0), and message framing was included as a moderator (1 = concrete, 0 = abstract). The dependent variable was participants’ trust in the charitable organization. A two-way ANOVA revealed a significant interaction effect between agent type and message framing on trust in the charitable organization (F(1, 273) = 9.924, p = 0.002, η2 = 0.035). However, neither main effect was statistically significant. More specifically, when the communication agent was an AI chatbot, participants reported significantly greater trust in the charitable organization when the message was framed concretely rather than abstractly (Mconcrete = 5.780, SD = 0.695 vs. Mabstract = 5.448, SD = 0.922; F(1, 136) = 5.734, p = 0.018, η2 = 0.040). In contrast, when the agent was a human, trust was significantly higher under abstract framing than under concrete framing (Mabstract = 5.647, SD = 0.792 vs. Mconcrete = 5.320, SD = 1.041; F(1, 137) = 4.370, p = 0.038, η2 = 0.031). Additionally, under concrete framing, participants in the AI condition reported higher trust than those in the human condition (MAI = 5.780, SD = 0.695 vs. Mhuman = 5.320, SD = 1.041; F(1, 139) = 8.953, p = 0.003, η2 = 0.057). Conversely, under abstract framing, trust did not differ significantly between AI and human agents (MAI = 5.448, SD = 0.922 vs. Mhuman = 5.647, SD = 0.792; F(1, 134) = 1.837, p = 0.178). These results support Hypothesis 1a.
A two-way ANOVA with donation intention as the dependent variable revealed a significant interaction between communication agent type and message framing (F(1, 273) = 6.252, p = 0.013, η2 = 0.022); however, neither main effect was significant. Follow-up analyses showed that, when the agent was an AI chatbot, participants reported higher donation intention under concrete framing than under abstract framing (Mconcrete = 5.951, SD = 0.738 vs. Mabstract = 5.590, SD = 0.949; F(1, 136) = 6.269, p = 0.013, η2 = 0.044). In contrast, when the agent was human, donation intention did not differ significantly between concrete and abstract framing conditions (Mconcrete = 5.479, SD = 1.275 vs. Mabstract = 5.717, SD = 0.957; F(1, 137) = 1.556, p = 0.214; see Figure 2). These results support Hypothesis 1b.
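The 2 × 2 ANOVAs and simple-effects tests above follow a standard pattern that can be reproduced, for instance, with statsmodels. The sketch below runs on simulated data: the variable coding (AI = 1, concrete = 1) comes from the text, but the data frame itself is hypothetical.

```python
# Sketch of the 2 x 2 between-subjects ANOVA on simulated trust scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 277  # total N after attention checks; cell assignment here is random
df = pd.DataFrame({
    "agent":   rng.integers(0, 2, n),    # AI = 1, human = 0
    "framing": rng.integers(0, 2, n),    # concrete = 1, abstract = 0
})
# Simulated 7-point trust scores with a small agent x framing interaction:
df["trust"] = (5.5 + 0.3 * df["agent"] * df["framing"]
               - 0.15 * df["agent"] + rng.normal(0, 0.9, n)).clip(1, 7)

model = smf.ols("trust ~ C(agent) * C(framing)", data=df).fit()
print(anova_lm(model, typ=2))            # main effects and the interaction

# Simple effect of framing within the AI condition:
ai = df[df["agent"] == 1]
print(anova_lm(smf.ols("trust ~ C(framing)", data=ai).fit(), typ=2))
```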

3.3. Study 1 Discussion

The results of Study 1 provide insights into how consumers respond to AI chatbots in online fundraising communication. Although AI chatbots are often perceived as lacking agency, value judgment, and goal-setting capacity, the findings indicate that they can foster greater trust and donation willingness when their communication adopts a concrete, detail-oriented style. This pattern suggests that consumers expect AI agents to excel in factual, data-driven interactions rather than in abstract or moral persuasion. Such results highlight the importance of psychological alignment between an AI agent’s perceived capabilities and the level of message construal in shaping persuasive effectiveness. Building on this logic, Study 2 further investigates why concrete messages from AI agents elicit greater trust, examining the mediating role of perceived message credibility and testing alternative explanations for robustness.

4. Study 2a

Study 2a aims to test Hypotheses 2a and 2b by examining the mediating role of perceived message credibility, while also ruling out alternative explanations involving emotional responses. Specifically, when an AI agent adopts an abstract, high-level construal in persuasive communication, consumers may experience a sense of mismatch, distrust, or psychological reactance. Prior research has shown that AI-generated appeals involving emotional or value-laden content can elicit moral discomfort or aversion in consumers [84]. Accordingly, abstract messages delivered by AI may evoke stronger negative emotions and weaker positive emotions, which could in turn undermine trust in the charitable organization and reduce donation intentions. To address this concern, a secondary aim of Study 2a is to empirically rule out positive and negative affect as alternative explanatory mechanisms.
The experiment employed a 2 (Communication agent: AI vs. Human) × 2 (Message framing: Abstract vs. Concrete) between-subjects design, with participants randomly assigned to one of the four conditions. A priori power analysis conducted using G*Power indicated that a total sample size of 274 participants would be required to detect a small-to-medium effect size (f = 0.17) with 80% statistical power at a significance level of 0.05. To ensure sufficient power, 305 participants were recruited from the Credamo platform. Following Arango et al. [85], participants were asked at the end of the experiment to recall both the identity of the customer service agent (AI vs. human) and the content of the charity project in order to exclude those who did not attend to the materials. A total of 299 participants passed the attention check. Among these, 69% were female, the average age was 30.48 years (SD = 7.646), and 87% held at least a bachelor’s degree. For the randomization check, the F-test results indicated no significant differences in participants’ demographic variables across experimental conditions; detailed results are presented in Table A2 in Appendix B. The data used in this study are available in Supplementary Materials.

4.1. Procedure, Materials and Pretest

At the beginning of the experiment, participants were asked to imagine that they were browsing a charity donation website and came across a philanthropic project called the “Hope 1 + 1 Program”, initiated by the XYZ Philanthropic Foundation. They were first shown a brief description of the project:
“Although the government has implemented preferential policies such as waiving miscellaneous and textbook fees for students from economically disadvantaged families in rural areas during compulsory education, the path to education remains challenging in underdeveloped regions. The ‘Hope 1 + 1 Program’ is a scholarship project launched by the XYZ Foundation. It aims to provide financial assistance to students from relatively poor families during the compulsory education stage, helping them to complete their education smoothly.”
After reading the project description, participants viewed a simulated chat interface between the foundation’s customer service and themselves. Specifically, under the abstract message framing conditions, participants in the AI agent condition viewed the following dialog with a customer service agent labeled as an AI (displaying an AI avatar and named, for example, “CharityBot”):
Q: Hello, I would like to know how much this charity project can help the students it supports.
A: This project helps relieve the financial burden on students and their families, enabling them to receive a complete education and giving them the opportunity to change their future and grow into individuals who make positive contributions to society.
(Round 1)
Q: Okay, but I’m a bit concerned about whether the use of donations is truly transparent and effective.
A: Every penny you donate represents a kindness toward underprivileged children. We honor this goodwill by ensuring transparent fund management and by reporting on project outcomes, thereby contributing to an educational system that brings hope to more children.
(Round 2)
Q: I’m still a bit worried.
A: We fully understand your concerns. Please rest assured that we strive to earn trust through honesty and professionalism. We continuously enhance financial transparency and project management processes so that every participant can witness the long-term value this project creates in promoting education and social equity. Your donation will become a powerful force in changing the lives of students in need.
(Round 3)
In the human agent condition, the content of the dialog remained the same; however, the avatar was changed to a human profile picture, and a typing indicator (“The representative is typing …”) signaled that the agent was a human employee of the foundation.
In the concrete message framing conditions, the manipulation of the communication agent was identical to that described above. Participants either interacted with an AI agent (with an AI avatar and label) or a human agent (with a human profile picture and indicator message signaling human involvement). The content of the dialog, however, was revised to reflect concrete details:
Q: Hello, I’d like to know how much this charity project can help the students it supports.
A: This project helps students cover essential expenses related to living and studying—such as accommodation and meal costs—so they won’t be forced to drop out of school due to financial difficulties.
(Round 1)
Q: Okay, but I’m concerned about whether the donations will actually be used in a transparent and effective way.
A: We ensure that all financial records and oversight procedures related to the project are fully auditable. Every penny you donate is specifically allocated to support students’ daily and educational needs. We regularly report the names of sponsored students and share progress updates, including detailed accounts of what each donation has been used to purchase—for example, school supplies or living essentials.
(Round 2)
Q: I’m still a bit worried.
A: If you remain concerned, you can visit the project’s official website at any time to access our annual reports and real-time funding feedback. We also provide verifiable data on sponsored students. Our staff regularly reviews fund usage and will update you via email or text message, including the number of students supported and a breakdown of how each donation was spent.
(Round 3)
Participants’ perceived credibility of the communication source was measured using a modified version of the source credibility scale [86]. Specifically, we averaged the scores of trustworthiness and expertise dimensions to represent source credibility. The attractiveness dimension was excluded as it primarily captures characteristics such as physical appearance and sex appeal, which are not applicable to the AI chatbot context of the present research. Sample items included “To what extent do you feel this AI/human agent is unreliable/reliable?” and “dishonest/honest?” (1 = not at all; 7 = very much; Cronbach’s α = 0.923).
Participants’ emotional responses were assessed using a 7-item scale developed by Di Muro and Murray [87]. This scale was selected over alternatives such as PANAS because its items measure general affective states rather than discrete and specific emotions, making it more suitable for contexts involving interaction with AI in prosocial appeals. Additionally, the shorter length of the scale helps to reduce participant fatigue. Four items assessed the intensity of positive emotions (e.g., “To what extent did you feel positive emotions?”; α = 0.926), and three items measured the intensity of negative emotions (α = 0.851). Participants’ trust in the charitable organization (α = 0.912) and willingness to donate (α = 0.885) were measured using the same scale used in Study 1 [78,79]. At the end of the experiment, demographic information was collected.
The experimental materials for Study 2a were designed in the same way as those in Study 1. A pretest was conducted to assess the validity of the materials. Ninety-nine participants (58.6% female; Mage = 36.530, SD = 11.579; 80.8% held a bachelor’s degree or above) were recruited from the Credamo platform and randomly assigned to either the abstract or concrete message framing condition. They were then presented with the “Hope 1 + 1 Program” description and screenshots of the dialog with the online customer service agent used in the main experiment. Participants in the abstract and concrete framing conditions viewed the same dialog content as those in the main experiment’s human communication agent–abstract and human communication agent–concrete conditions, respectively, but the agent’s identity (AI vs. human) was not disclosed. Participants were asked to provide an open-ended response describing their thoughts about the charity project. Two independent coders then coded the participants’ responses, following the same procedure and rules as in Study 1’s pretest. Interrater reliability was excellent (ICC = 0.920), and any discrepancies were resolved through discussion to produce a final set of coding results.

4.2. Results

For the pretest, a chi-square test revealed that participants in the concrete message framing condition had more concrete thoughts about the charity project (e.g., “Improve the donation system and process”, “Learning from successful charity projects and establishing relevant regulations”) compared to participants in the abstract framing condition (35/50 vs. 15/49), χ2(1, N = 99) = 15.359, p < 0.001. Conversely, participants in the abstract message framing condition had more abstract thoughts about the project (e.g., “We should cultivate compassion and social responsibility,” “It is important for caring members of society to help others experience social warmth and inclusion”) than participants in the concrete framing condition (29/49 vs. 12/50), χ2(1, N = 99) = 12.626, p = 0.001. These results indicate that the experimental materials successfully manipulated the construal level of the messages.
To examine the interaction effect between communication agent type and message framing, a two-way ANOVA was conducted with agent type (AI = 1, human = 0) as the independent variable and message framing (concrete = 1, abstract = 0) as the moderator. The dependent variable was trust in the charitable organization. The analysis revealed a significant interaction effect on organizational trust (F(1, 295) = 7.332, p = 0.007, η2 = 0.017), whereas neither the main effect of communication agent nor that of message framing was significant. Specifically, when the agent was AI, participants in the concrete framing condition reported significantly higher trust than those in the abstract condition (Mconcrete = 5.840, SD = 0.736 vs. Mabstract = 4.973, SD = 1.377; F(1, 148) = 28.167, p < 0.001, η2 = 0.135). Conversely, when the agent was human, trust did not differ significantly between concrete and abstract framing conditions (Mabstract = 5.215, SD = 1.153 vs. Mconcrete = 5.382, SD = 1.107; F(1, 147) = 0.814, p = 0.369). Additionally, under concrete framing, participants in the AI condition reported higher trust than those in the human condition (MAI = 5.840, SD = 0.736 vs. Mhuman = 5.382, SD = 1.107; F(1, 149) = 8.953, p = 0.003, η2 = 0.057), whereas under abstract framing, trust did not differ between AI and human agents (MAI = 4.973, SD = 1.377 vs. Mhuman = 5.215, SD = 1.153; F(1, 146) = 1.332, p = 0.250). These results provided further support for Hypothesis 1a.
Another two-way ANOVA was conducted with donation intention as the dependent variable. Results indicated a significant interaction effect between communication agent and message framing (F(1, 295) = 4.219, p = 0.041, η2 = 0.014), while the main effects were not significant. When the communication agent was AI, donation intention was significantly higher under the concrete communication frame compared to the abstract frame (Mconcrete = 5.940, SD = 0.809 vs. Mabstract = 5.147, SD = 1.502; F(1, 148) = 16.219, p < 0.001, η2 = 0.099). When the agent was human, the difference in donation intention between different message framing types was not significant (Mconcrete = 5.308, SD = 1.319 vs. Mabstract = 5.507, SD = 1.274; F(1, 147) = 0.872, p = 0.352). The findings provide corroborating evidence for Hypothesis 1b.
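For replication purposes, the factorial analyses above can be expressed as follows. This is a minimal sketch in Python with statsmodels, assuming a per-participant data file with placeholder columns agent, framing, and trust; the file name is hypothetical.

```python
# Sketch of the 2 (agent) x 2 (framing) ANOVA on organizational trust.
# Column and file names are placeholders, not the study's data files.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("study2a.csv")  # hypothetical file: agent, framing, trust

# Sum-to-zero contrasts give SPSS-style Type III sums of squares.
model = ols("trust ~ C(agent, Sum) * C(framing, Sum)", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))  # interaction row tests H1a

# Simple effect of framing within the AI condition, mirroring the
# planned contrasts reported in the text.
ai = df[df["agent"] == "AI"]
simple = ols("trust ~ C(framing)", data=ai).fit()
print(sm.stats.anova_lm(simple, typ=2))
```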
A moderated mediation analysis was conducted using the PROCESS macro (Model 7), with message framing as the independent variable, perceived source credibility as the mediator, trust in the charitable organization as the dependent variable, and communication agent type as the moderator. Bootstrapping with 5000 resamples was employed. The index of moderated mediation was significant (index = 0.578, SE = 0.220, 95% CI [0.140, 1.026]), indicating that the indirect effect of message framing on trust through perceived source credibility depended on the type of communication agent. Specifically, there was a significant interaction between communication agent and message framing on perceived source credibility (b = 0.570, SE = 0.218, 95% CI [0.140, 1.000]). For the AI agent condition, participants exposed to concrete message framing reported significantly higher perceived source credibility than those in the abstract condition (b = 0.680, SE = 0.154, 95% CI [0.377, 0.983]). In contrast, for the human agent condition, message framing did not significantly affect perceived source credibility (b = 0.110, SE = 0.155, 95% CI [−0.195, 0.414]). Consequently, when the agent was AI, concrete framing increased trust in the charitable organization via enhanced perceived source credibility (indirect effect b = 0.690, SE = 0.161, 95% CI [0.391, 1.026]). However, this indirect effect was not significant when the agent was human (indirect effect b = 0.111, SE = 0.157, 95% CI [−0.198, 0.426]). These results provide preliminary support for Hypothesis 2a.
Using the same moderated mediation model, with message framing as the independent variable, perceived source credibility as the mediator, donation intention as the dependent variable, and communication agent as the moderator, the index of moderated mediation was again significant (index = 0.611, SE = 0.235, 95% CI [0.151, 1.065]). Specifically, when the communication agent was AI, the concrete message framing significantly increased donation intention through enhancing perceived source credibility (indirect effect b = 0.729, SE = 0.169, 95% CI [0.409, 1.071]). Conversely, when the communication agent was human, this indirect effect was not significant (indirect effect b = 0.118, SE = 0.168, 95% CI [−0.210, 0.447]). These results suggest that the effectiveness of concrete message framing in promoting donation intention operates through increased perceived credibility of the communication source, but only when the agent is AI rather than human. These findings provide preliminary support for Hypothesis 2b.
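PROCESS Model 7 is a proprietary SPSS/R macro; readers without it can approximate the same logic with a percentile bootstrap. The sketch below is illustrative only, assuming placeholder columns framing (concrete = 1), agent (AI = 1), credibility, and trust; it estimates the two conditional indirect effects and the index of moderated mediation as their difference.

```python
# Minimal percentile-bootstrap sketch of a PROCESS-Model-7-style analysis.
# The index of moderated mediation equals a3 * b, where a3 is the
# framing x agent coefficient in the mediator model and b is the mediator's
# coefficient in the outcome model. All names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def effects(d):
    a_model = smf.ols("credibility ~ framing * agent", data=d).fit()
    y_model = smf.ols("trust ~ credibility + framing", data=d).fit()
    a1 = a_model.params["framing"]        # framing -> mediator when agent = 0
    a3 = a_model.params["framing:agent"]  # moderation of the a-path
    b = y_model.params["credibility"]     # mediator -> outcome
    return (a1 + a3) * b, a1 * b, a3 * b  # indirect|AI, indirect|human, index

df = pd.read_csv("study2a.csv")           # placeholder file name
rng = np.random.default_rng(2025)
boot = np.array([effects(df.sample(len(df), replace=True, random_state=rng))
                 for _ in range(5000)])

for name, draws in zip(["indirect (AI)", "indirect (human)", "index"], boot.T):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: {draws.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```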
The alternative explanations of positive and negative emotions were then tested. Using PROCESS Model 7, perceived source credibility, positive emotion, and negative emotion were included as parallel mediators to examine moderated mediation effects, with trust in the charitable organization as the dependent variable. Results showed that, even after controlling for positive and negative emotions, the moderated mediation effect via perceived source credibility remained significant (index = 0.393; SE = 0.163; 95% CI = [0.093, 0.741]).
Positive emotion also exhibited a significant moderated mediation effect (index = 0.253; SE = 0.105; 95% CI = [0.066, 0.485]). Specifically, there was a significant interaction effect between communication agent type and message framing on participants’ positive emotional responses (b = 1.031; SE = 0.368; 95% CI = [0.307, 1.756]). When the agent was AI, participants reported higher positive emotion under concrete framing than abstract framing (b = 1.060; SE = 0.260; 95% CI = [0.549, 1.571]). In contrast, when the agent was human, message framing did not significantly affect positive emotion (b = 0.029; SE = 0.261; 95% CI = [–0.485, 0.542]). Consequently, when the agent was AI, concrete framing enhanced trust in the charitable organization via increased positive emotion (b = 0.260; SE = 0.075; 95% CI = [0.132, 0.423]), whereas this effect was not significant for human agents. Similarly, when donation intention was used as the dependent variable, positive emotion also showed a significant moderated mediation effect (index = 0.233; SE = 0.101; 95% CI = [0.059, 0.455]). Notably, the moderated mediation effect via perceived source credibility remained significant in this model as well (index = 0.505; SE = 0.173; 95% CI = [0.106, 0.784]).
Regarding negative emotion, the interaction between communication agent type and message framing did not significantly influence participants’ negative emotional responses. As a result, negative emotion did not function as a significant mediator for either trust in the charitable organization or donation intention.
In sum, although the moderated mediation effect of positive emotion was significant, the moderated mediation via perceived source credibility remained significant even after controlling for this emotional pathway. This suggests that the proposed mediation through perceived source credibility accounts for additional variance in the relationship between the independent variables and the dependent variables beyond what is explained by positive emotion alone. The two mediating mechanisms are not mutually exclusive but complementary; thus, positive emotion does not replace the mediating role of perceived source credibility. Overall, these findings suggest that perceived source credibility mediates the interactive effect of communication agent identity (AI vs. human) and message framing (abstract vs. concrete) on consumer trust in charitable organizations and donation intention (see Figure 3).
One potential explanation for the role of positive emotion is that higher perceived credibility may elicit stronger positive emotional responses, which in turn promote prosocial behavior. To explore this possibility, we conducted an additional supplementary analysis in which agent type served as the independent variable, message framing as the moderator, source credibility as the first mediator, positive emotion as the second mediator, and trust in the charitable organization as the dependent variable. The results revealed a significant sequential mediation effect through both credibility and positive emotion (index = 0.179; 95% CI = [0.042, 0.353]). These findings suggest that positive emotion provides a more nuanced explanation of the relationship between perceived source credibility and consumer trust, complementing—rather than replacing—the proposed credibility pathway. Taken together, the results provide support for Hypotheses 2a and 2b.
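The sequential model can be sketched in the same way. The block below illustrates the point estimate of the moderated serial indirect effect under the same placeholder column names (with pos_emotion standing in for positive emotion); it is a sketch of the logic, not the authors' exact specification.

```python
# Sketch of the exploratory serial pathway reported above
# (credibility -> positive emotion -> trust), with the agent x framing
# interaction moderating the first stage. Column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def serial_indirect(d: pd.DataFrame) -> float:
    m1 = smf.ols("credibility ~ agent * framing", data=d).fit()
    m2 = smf.ols("pos_emotion ~ credibility + agent * framing", data=d).fit()
    y = smf.ols("trust ~ pos_emotion + credibility + agent * framing",
                data=d).fit()
    # moderated serial indirect effect: a3 * d21 * b
    return (m1.params["agent:framing"] * m2.params["credibility"]
            * y.params["pos_emotion"])

# Confidence intervals can be obtained exactly as in the moderated-mediation
# sketch above, by resampling rows with replacement and recomputing.
```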

4.3. Study 2a Discussion

Building on the initial evidence from Study 1, Study 2a sheds light on why concrete message framing is particularly persuasive when delivered by an AI chatbot. The findings reveal that perceived message credibility serves as the key psychological bridge linking the communication agent to consumer trust and donation willingness. When an AI chatbot communicates with concrete, detail-oriented language, consumers perceive its message as more credible—consistent with their expectations that AI excels at delivering factual, data-based information. This enhanced credibility, in turn, strengthens their trust in the charitable organization and their willingness to donate.
Importantly, these results also clarify what does not drive the observed effects. Although emotional responses—especially positive affect—can sometimes promote trust in charitable contexts, our data show that the effect of AI message framing is not merely emotional. Even after accounting for both positive and negative affect, the mediating pathway through perceived credibility remains robust. This finding reinforces the notion that AI persuasion operates less through emotion and more through cognitive assessments of reliability and fit.
In essence, Study 2a contributes a more nuanced understanding of the “AI persuasion paradox”: while AI agents may seem cold or impersonal, they can gain consumers’ trust by communicating in ways that align with their perceived strengths—precision, clarity, and concreteness. Building on these insights, Study 2b extends this investigation to a Western cultural context to examine whether the credibility-based mechanism holds across different cultural perceptions of AI.

5. Study 2b

Study 2b aims to test Hypotheses 2a and 2b in a different cultural context, thereby increasing sample heterogeneity and enhancing the robustness and generalizability of the findings. Cultural background may influence the proposed interaction effect between the AI communication agent and message framing for two reasons.
First, Western consumers generally endorse an independent self-construal [88], which predisposes them to prefer information framed at a high construal level—that is, in an abstract format [89]. In contrast, Eastern consumers typically endorse an interdependent self-construal, which makes them more receptive to information presented at a low construal level, or in a concrete format. Consequently, the proposed effect—that AI is more effective when paired with concrete message framing—might be attenuated among Western consumers due to their preference for abstract frames.
Second, cultural traditions and beliefs such as animism may make Eastern consumers more accepting of AI and robotic technologies [90]. Western consumers, who tend to hold more cautious or negative attitudes toward AI, may therefore be less influenced by concrete message framing in interactions with AI agents.
Nonetheless, if the underlying assumption of this study—that people generally perceive AI as lacking subjective awareness and agency, and therefore as low-level construal agents—transcends cultural boundaries, the findings may still hold among Western consumers. Study 2b aims to examine whether the effects observed in Study 2a generalize across cultural contexts.
The experiment employed a 2 (Communication agent: AI vs. Human) × 2 (Message framing: Abstract vs. Concrete) between-subjects design, with participants randomly assigned to one of the four conditions. To ensure adequate statistical power, 284 participants from the UK were recruited via the Prolific platform, which is widely used in psychological research and noted for its high data quality relative to similar platforms [91]. The survey itself was administered through the Credamo platform, whose automatic random assignment function supported internal validity. Consistent with Study 2a, participants who failed to correctly identify the type of customer service agent were excluded from the analysis, leaving 281 participants who passed the attention check. Among them, 48% were female, the average age was 43.59 years (SD = 13.025), and 57.3% held at least a bachelor’s degree. For the randomization check, the F-test results indicated no significant differences in participants’ demographic variables across experimental conditions; detailed results are presented in Table A2 in Appendix B. The data used in this study are available in Supplementary Materials.

5.1. Procedure, Materials and Pretest

At the beginning of the experiment, participants were asked to imagine that they were browsing a charity donation website and came across a philanthropic project called “The Water Project”. They were first shown a brief description of the project:
“In many communities in sub-Saharan Africa, people do not have regular access to clean water. Children often spend a lot of time collecting water for daily use, which can affect their everyday life. The Water Project is an organization that works in these communities to provide safe water. The project supports people so that they can meet their daily water needs more easily.”
After reading the project description, participants viewed a simulated chat interface between the foundation’s customer service agent and themselves. Specifically, under the abstract message framing conditions, participants in the AI (human) agent condition viewed the following dialog with a customer service agent labeled as an AI (human):
Customer Service: Hi there! I’m the AI (human) representative from The Water Project, and I’d love to share a bit about what we do.
Q: Hello! I’m curious how exactly does this project help local communities?
Customer Service: Great question. Access to clean water is really the foundation for long-term community development. It means children can attend school, families can stay healthy, and people can build their own future with dignity and hope. Clean water empowers entire communities to thrive.
Q: That sounds inspiring, but I’m wondering does my small donation actually make a difference?
Customer Service: Absolutely. Every contribution helps create lasting change. Your donation joins others to support local teams who bring safe water and renewed opportunity to families. It’s not just about water—it’s about restoring hope and unlocking human potential. Together, we can make a real difference.
As in Study 2a, in the human agent condition the dialog content remained the same except for the greeting; the avatar was changed to a human profile picture, and a typing indicator (“The representative is typing …”) signaled that the agent was a human employee of the foundation.
In the concrete message framing conditions, the manipulation of the communication agent remained the same as described above. The content of the dialog was revised to reflect concrete details:
Customer Service: Hi there! I’m the AI (human) representative from The Water Project, I’d be happy to explain how your donation works.
Q: Hello! I’d like to know how exactly this project helps local communities.
Customer Service: Sure! Your donation supports local engineers and builders in Kenya, Uganda, and Sierra Leone. They drill wells, install hand pumps, and repair existing water systems. Once the work is done, families can collect clean water right in their village.
Q: That sounds good, but I’m wondering how do I know the money is being used effectively?
Customer Service: That’s an important question. Each project has its own online report with pictures, GPS coordinates, and maintenance updates, so you can see exactly how your donation is being used. Your support directly funds a specific water site that provides safe water for daily use.
Participants’ perceived credibility of the communication source (α = 0.965), trust in the charitable organization (α = 0.964) and willingness to donate (α = 0.955) were measured by the same scale used in Study 2a. At the end of the experiment, demographic information was collected.
A pretest was conducted to verify the validity of the experimental materials. Ninety-seven participants were recruited from the Prolific platform (47% female; Mage = 44.64, SD = 12.892; 55.7% held a bachelor’s degree or above) and were randomly assigned to either the abstract or concrete message framing condition. The pretest procedure was identical to that used in Study 1 and Study 2a. After viewing the dialog with the representative of the charitable organization, participants responded to an open-ended question, listing their thoughts about The Water Project. Two independent coders then coded the responses. Inter-rater reliability was excellent (ICC = 0.906), and discrepancies were resolved through discussion to produce a final coding dataset.

5.2. Results

For the pretest, a chi-square test revealed that participants in the concrete message framing condition generated significantly more concrete thoughts about the charity project (e.g., “What percentage of donation goes to the projects overseas?”; “How does the project decide which countries the help is directed to?”) than participants in the abstract framing condition (35/52 vs. 8/45), χ2(1, N = 97) = 23.981, p < 0.001. Conversely, participants in the abstract message framing condition produced significantly more abstract thoughts (e.g., “It will help a lot of people and prevent unnecessary illnesses”; “It will enable children to go to school without the fear of being unwell”) compared to participants in the concrete condition (34/45 vs. 13/52), χ2(1, N = 97) = 24.686, p < 0.001. These results indicate that the experimental materials successfully manipulated the intended level of construal.
To examine the interaction effect between communication agent type and message framing, a two-way ANOVA was conducted with agent type (AI = 1, human = 0) as the independent variable and message framing (concrete = 1, abstract = 0) as the moderator. The dependent variable was trust in the charitable organization. The analysis revealed a significant interaction effect on organizational trust (F(1, 277) = 9.971, p = 0.002, η2 = 0.035), whereas the main effects of communication agent and message framing on trust were both not significant. Specifically, when the communication agent was AI, participants in the concrete message framing condition reported significantly higher trust in the charitable organization than those in the abstract framing condition (Mconcrete = 5.033, SD = 1.064 vs. Mabstract = 4.223, SD = 1.377; F(1, 138) = 22.937, p < 0.001, η2 = 0.099). Conversely, when the communication agent was human, no significant difference in trust was observed between the concrete and abstract message framing conditions (Mconcrete = 4.586, SD = 1.324 vs. Mabstract = 4.704, SD = 1.135; F(1, 139) = 0.326, p = 0.569). Additionally, when adopting concrete message framing, participants in the AI agent condition reported significantly higher trust in the charitable organization than those in the human agent condition (Mai = 5.033, SD = 1.064 vs. Mhuman = 4.586, SD = 1.324; F(1, 138) = 4.857, p = 0.029, η2 = 0.034). Conversely, when adopting abstract message framing, trust was significantly higher for those who communicated with a human agent than for those in the AI agent condition (Mhuman = 4.704, SD = 1.135 vs. Mai = 4.223, SD = 1.377; F(1, 139) = 5.118, p = 0.025, η2 = 0.036; see Figure 4). Overall, these results provided further support for Hypothesis 1a.
Another two-way ANOVA was conducted with donation intention as the dependent variable. Results indicated a significant interaction effect between communication agent and message framing (F(1, 277) = 5.563, p = 0.019, η2 = 0.020), while the main effects were not significant. When the communication agent was AI, donation intention was significantly higher under the concrete communication frame compared to the abstract frame (Mconcrete = 4.293, SD = 1.557 vs. Mabstract = 3.521, SD = 1.607; F(1, 138) = 8.320, p = 0.005, η2 = 0.057). When the agent was human, the difference in donation intention between different message framing types was not significant (Mconcrete = 5.308, SD = 1.319 vs. Mabstract = 5.507, SD = 1.274; F(1, 139) = 0.872, p = 0.352). The findings provide corroborating evidence for Hypothesis 1b.
A moderated mediation analysis was conducted using the PROCESS macro (Model 7) with message framing as the independent variable, perceived source credibility as the mediator, trust in the charitable organization as the dependent variable, and communication agent type as the moderator. Bootstrapping resampling was set at 5000 iterations. The index of moderated mediation was significant (index = 0.489, SE = 0.217, 95% CI [0.080, 0.934]), indicating that the indirect effect of message framing on organizational trust through perceived source credibility was contingent on the type of communication agent. Specifically, there was a significant interaction between communication agent and message framing on perceived source credibility (b = 0.723, SE = 0.294, 95% CI [0.144, 1.301]). For the AI agent condition, participants exposed to the concrete communication framing reported significantly higher perceived source credibility compared to the abstract framing condition (b = 0.796, SE = 0.208, 95% CI [0.386, 1.206]). In contrast, for the human agent condition, the message framing type had no significant effect on perceived source credibility (b = 0.073, SE = 0.208, 95% CI [−0.335, 0.482]). Consequently, when the communication agent was AI, the concrete message frame significantly increased trust in the charitable organization via enhanced perceived source credibility (indirect effect b = 0.533, SE = 0.167, 95% CI [0.229, 0.876]). However, this indirect effect was not significant when the communication agent was human (indirect effect b = 0.049, SE = 0.133, 95% CI [−0.218, 0.310]). Therefore, Hypothesis 2a received further support.
Using the same moderated mediation model, with message framing as the independent variable, perceived source credibility as the mediator, donation intention as the dependent variable, and communication agent as the moderator, the index of moderated mediation was again significant (index = 0.496, SE = 0.224, 95% CI [0.088, 0.977]). Specifically, when the communication agent was AI, the concrete message framing significantly increased donation intention through enhancing perceived source credibility (indirect effect b = 0.547, SE = 0.177, 95% CI [0.227, 0.915]). Conversely, when the communication agent was human, this indirect effect was not significant (indirect effect b = 0.050, SE = 0.139, 95% CI [−0.222, 0.317]). These findings suggest that the effectiveness of concrete communication framing in promoting donation intention operates via increased perceived credibility of the communication source, but only when the agent is an AI rather than a human. The above evidence provides further support for Hypothesis 2b.

5.3. Study 2b Discussion

Study 2b extended the investigation to a Western cultural context to examine whether the observed AI–message fit effect is culturally specific or universal. From a cultural psychology perspective, this effect might weaken among Western consumers, who tend to exhibit independent self-construals, prefer abstract information, and display greater skepticism toward AI technologies. However, the results showed that even among Western participants, AI chatbots communicating with concrete, detail-oriented messages continued to generate higher perceived credibility, stronger trust in the charitable organization, and greater donation intentions.
These findings suggest that the preference for concreteness in AI-mediated communication transcends cultural boundaries. A core psychological mechanism—people’s lay belief that AI lacks subjective awareness and autonomous agency—appears to override cultural variations in self-construal and attitudes toward AI. Practically, the results indicate that in Western contexts where abstract, value-laden appeals are common, AI-delivered messages are more effective when emphasizing concrete information, clear procedures, and verifiable evidence to maintain credibility and trust. Building on these insights, Study 3 explores whether anthropomorphic design can alter these perceptions by making AI agents appear more capable of abstract, value-based communication.

6. Study 3

While Studies 1 and 2 demonstrated that AI agents are generally more persuasive when delivering concrete messages, an open question remained: Would this pattern persist when the AI appears more human-like? Therefore, the purpose of Study 3 was to test Hypotheses 3a and 3b, which propose that the anthropomorphism of an AI chatbot moderates the mediating role of perceived source credibility in the interactive effect of communication agent type (AI vs. human) and message framing (concrete vs. abstract) on consumer trust in charitable organizations and their willingness to donate. Specifically, compared to low-anthropomorphism AI agents, highly anthropomorphized AI agents are expected to elicit greater trust and donation intention from consumers when delivering abstract messages.
This study employed a 3 (communication agent type: low-anthropomorphism AI vs. high-anthropomorphism AI vs. human) × 2 (message framing: abstract vs. concrete) between-subjects design. Participants were randomly assigned to one of the six conditions. According to a G*Power analysis, assuming a significance level of 0.05 and a small-to-moderate effect size (f = 0.17), a minimum sample size of 337 was required to achieve 80% statistical power. A total of 415 participants were recruited through an online panel, and 399 passed the attention check. Among them, 68% were female, the average age was 31.27 years (SD = 7.858), and 86.7% held at least a bachelor’s degree. For the randomization check, the F-test results indicated no significant differences in participants’ demographic variables across experimental conditions; detailed results are presented in Table A2 in Appendix B. The data used in this study are available in Supplementary Materials.
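The stated sample-size requirement can be checked against the noncentral F distribution. The sketch below reproduces a G*Power-style a priori computation for the interaction (numerator df = 2, six cells) under the reported inputs of f = 0.17, α = 0.05, and 80% power.

```python
# Reproducing the a priori power analysis for the 3 x 2 design: for the
# agent x framing interaction, numerator df = (3 - 1) * (2 - 1) = 2, six
# cells, and noncentrality lambda = f^2 * N (Cohen, 1988).
from scipy.stats import f as f_dist, ncf

f_effect, alpha, target_power = 0.17, 0.05, 0.80
groups, df_num = 6, 2

n = groups + 1
while True:
    df_den = n - groups
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    power = 1 - ncf.cdf(f_crit, df_num, df_den, f_effect**2 * n)
    if power >= target_power:
        break
    n += 1

print(n, round(power, 3))  # -> approximately 337, as reported above
```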

6.1. Procedure and Materials

Study 3 used the same scenario and dialog materials as Study 2a, whose message framing manipulation had already been validated in that study’s pretest. At the beginning of the experiment, participants were asked to imagine themselves browsing a charitable donation website, where they came across a project named the “Hope 1 + 1 Initiative” launched by a charitable organization called “XYZ Foundation.” They then read the same description of the project as in Study 2a. Next, participants viewed a simulated conversation window between a customer service agent and themselves. The content of the conversation was framed using high (abstract) or low (concrete) construal levels, consistent with the materials from Study 2a. The only variation across agent conditions was the appearance of the agent: in the AI agent condition, participants saw a mechanical GPT-style avatar; in the human agent condition, a human avatar was displayed; and in the anthropomorphized AI condition, participants saw a human avatar labeled “AI Customer Service.”
Next, as in the prior studies, participants rated the perceived credibility of the information using two dimensions from Ohanian’s three-dimensional credibility scale [86]. Example items asked to what extent the AI/human customer service agent seemed very unreliable/very reliable and very dishonest/very honest (7-point bipolar scales; α = 0.914). Attention checks followed, using two items adapted from Arango et al. [85], asking whether participants correctly recalled whether the customer service agent was an AI or a human and whether they correctly remembered the content of the charity project.
Trust in the charitable organization was assessed by the same scale used in Study 1 (α = 0.845) [78]. Subsequently, we measured participants’ willingness to donate, using an adapted version of the participation intention scale (α = 0.749) [79]. Finally, participants reported their demographic information.
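For completeness, the scale reliabilities reported throughout the studies follow the standard Cronbach formula; a minimal, self-contained implementation with purely synthetic scores is shown below.

```python
# Minimal Cronbach's alpha for a multi-item scale; the example matrix is
# synthetic and purely illustrative.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

scores = np.array([[7, 6, 7], [5, 5, 6], [6, 6, 6], [4, 5, 4], [7, 7, 6]])
print(round(cronbach_alpha(scores), 3))
```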

6.2. Results

To test Hypotheses 3a and 3b, this study used the human agent condition as the reference group and created two dummy variables: an anthropomorphism dummy (1 = anthropomorphized AI condition; 0 = non-anthropomorphized AI and human conditions) and a non-anthropomorphism dummy (1 = non-anthropomorphized AI condition; 0 = anthropomorphized AI and human conditions). Message framing was coded such that abstract messages = 0 and concrete messages = 1.
This study first examined the effects of agent type on the dependent variable. Using trust in the charitable organization as the dependent variable, we entered message framing, the anthropomorphism and non-anthropomorphism dummy variables, and the interactions of each dummy variable with message framing into a regression model. The analysis revealed a significant interaction between the non-anthropomorphism dummy variable and message framing (b = 0.891, β = 0.368, t = 4.136, p < 0.001). In addition, the non-anthropomorphism dummy variable showed a significant negative direct effect on trust (b = −0.464, β = −0.240, t = −3.036, p = 0.003). By contrast, the interaction between the anthropomorphism dummy variable and message framing was not significant (b = 0.035, β = 0.015, t = 0.165, p = 0.869). The direct effects of the anthropomorphism dummy variable (b = 0.129, β = 0.068, t = 0.865, p = 0.388) and message framing (b = −0.139, β = −0.077, t = −0.920, p = 0.358) on trust were also non-significant. These findings suggest that when the communication agent is an AI chatbot that lacks anthropomorphic cues, the type of message framing significantly influences consumers’ trust in the charitable organization. However, when the agent is an anthropomorphized AI or a human representative, message framing has no significant effect on trust.
Using donation intention as the dependent variable, we conducted a regression analysis in which message framing, anthropomorphism, non-anthropomorphism, and their interaction terms were included as independent variables. The results revealed a significant interaction effect between non-anthropomorphism and message framing on donation intention (b = 0.506, β = 0.201, t = 2.209, p = 0.026). In addition, the main effect of non-anthropomorphism on donation intention was also significant (b = −0.321, β = −0.160, t = −2.000, p = 0.046). In contrast, the interaction between anthropomorphism and message framing was not significant (b = 0.163, β = 0.233, t = 0.700, p = 0.484). Similarly, neither the main effect of anthropomorphism (b = 0.248, β = 0.126, t = 1.584, p = 0.114) nor the main effect of message framing (b = 0.031, β = 0.017, t = 0.196, p = 0.845) reached statistical significance. Taken together, these findings suggest that message framing significantly influences both consumer trust in charitable organizations and donation intention only when the communication agent is an AI chatbot that lacks anthropomorphic cues. When the agent is either a human or an anthropomorphized AI chatbot, message framing has no significant effect on these outcomes.
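The dummy-coding scheme and interaction regressions described above can be written compactly. The following sketch uses Python with statsmodels; the file name, condition labels, and column names are placeholders rather than the study’s actual variables.

```python
# Sketch of the dummy-coded regression for the 3 (agent) x 2 (framing)
# design, with the human condition as the reference group. All names are
# illustrative, not taken from the study's data files.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3.csv")  # hypothetical file: agent, framing, trust

df["anthro_ai"] = (df["agent"] == "anthropomorphized_ai").astype(int)
df["plain_ai"] = (df["agent"] == "non_anthropomorphized_ai").astype(int)
df["concrete"] = (df["framing"] == "concrete").astype(int)

model = smf.ols(
    "trust ~ concrete + anthro_ai + plain_ai"
    " + concrete:anthro_ai + concrete:plain_ai",
    data=df,
).fit()
print(model.summary())  # concrete:plain_ai carries the key interaction
```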
Further ANOVA analyses revealed that, when the AI chatbot lacks anthropomorphic cues, participants exhibited significantly higher trust in the charitable organization when the message was framed concretely rather than abstractly (Mconcrete = 5.924, SD = 0.643 vs. Mabstract = 5.135, SD = 1.248; F(1, 128) = 20.703, p < 0.001, η2 = 0.139). Similarly, their donation intention was also significantly stronger under concrete message framing (Mconcrete = 5.939, SD = 0.642 vs. Mabstract = 5.367, SD = 1.304; F(1, 128) = 18.048, p = 0.002, η2 = 0.074). In contrast, for the anthropomorphized AI condition, there was no significant difference in participants’ trust in the charitable organization between the concrete and abstract message framing conditions (Mconcrete = 5.590, SD = 0.740 vs. Mabstract = 5.783, SD = 0.734; F(1, 132) = 2.292, p = 0.132). Interestingly, participants demonstrated greater donation intention when the message was framed abstractly rather than concretely (Mabstract = 6.007, SD = 0.725 vs. Mconcrete = 5.561, SD = 0.831; F(1, 132) = 7.839, p = 0.006, η2 = 0.056). Under the human agent condition, message framing had no significant effect on either participants’ trust in the charitable organization or their donation intention (Mabstract = 5.599, SD = 0.802 vs. Mconcrete = 5.460, SD = 0.937; F(1, 133) = 0.866, p = 0.354). These findings provide preliminary support for Hypotheses 3a and 3b.
This study further examined differences in perceived source credibility across experimental conditions. Regression analyses revealed a significant interaction effect between the dummy variable for the non-anthropomorphized AI group and message framing on perceived source credibility (b = 0.839, β = 0.379, t = 4.317, p < 0.001). The direct effect of the non-anthropomorphized AI group was also significant (b = −0.438, β = −0.250, t = −3.190, p = 0.002). In contrast, the interaction between the anthropomorphized AI group and message framing was not significant (b = −0.095, β = −0.043, t = −0.494, p = 0.621). Neither the direct effect of the anthropomorphized AI group nor the main effect of message framing was statistically significant. These results suggest that message framing significantly influenced consumers’ perceived source credibility only when the customer service agent was an AI chatbot that lacks anthropomorphic cues. No such effect was observed under the anthropomorphized AI or human customer service conditions. Specifically, follow-up ANOVA results showed that, when the AI chatbot lacks anthropomorphic cues, participants exposed to concrete message framing perceived the source as significantly more credible (Mconcrete = 5.885, SD = 0.495) than those exposed to abstract message framing (Mabstract = 5.136, SD = 1.116; F(1, 128) = 24.718, p < 0.001, η2 = 0.162).
This study next conducted a moderated mediation analysis using Model 7 of the PROCESS macro in SPSS 25. In this model, message framing was specified as the independent variable, perceived source credibility as the mediator, and the dummy variable for the non-anthropomorphized AI agent as the moderator. Trust in the charitable organization served as the dependent variable, and the dummy variable for the anthropomorphized AI group was included as a covariate. The number of bootstrap resamples was set to 5000. The results showed that while the main effect of message framing on perceived source credibility was not significant, the interaction between message framing and the non-anthropomorphized AI dummy variable was positively significant (b = 0.887, SE = 0.169, 95% CI = [0.555, 1.219]). The conditional indirect effect of message framing on trust in the charitable organization via perceived source credibility was significant when the agent was an AI chatbot that lacks anthropomorphic cues (b = 0.613, SE = 0.137, 95% CI = [0.362, 0.895]), but not significant when the agent was either a human or an anthropomorphized AI (b = −0.113, SE = 0.076, 95% CI = [−0.270, 0.034]). The index of moderated mediation was significant (index = 0.726, SE = 0.159, 95% CI = [0.434, 1.053]). Setting donation intention as the dependent variable and applying the same moderated mediation model, the results similarly indicated a significant conditional indirect effect of message framing through perceived source credibility when the agent was an AI chatbot that lacks anthropomorphic cues (b = 0.555, SE = 0.131, 95% CI = [0.372, 0.830]), but not when the agent was either a human or an anthropomorphized AI (b = −0.102, SE = 0.068, 95% CI = [−0.244, 0.029]). Again, the index of moderated mediation was significant (index = 0.658, SE = 0.151, 95% CI = [0.395, 0.972]).
These analyses indicate that, when the AI chatbot lacked anthropomorphic cues, the type of message framing significantly influenced participants’ perceived source credibility, which in turn affected their trust in the charitable organization and their willingness to donate. However, for anthropomorphized AI agents and human agents, message framing did not significantly impact perceived source credibility. Taken together, these findings provide support for Hypotheses 3a and 3b, suggesting that the level of AI anthropomorphism moderates the interactive effect of message framing and agent identity on perceived organizational trustworthiness and donation intentions (see Figure 5).

6.3. Study 3 Discussion

Building on the prior studies, Study 3 further explored when and why the communication effectiveness of AI chatbots depends on their level of anthropomorphism. The results revealed that when the AI chatbot lacked anthropomorphic cues, concrete messages significantly boosted perceived source credibility, which in turn enhanced consumers’ trust in the charitable organization and willingness to donate. In contrast, when the AI appeared human-like, this framing effect disappeared.
This finding shows that when the chatbot looked mechanical, consumers relied on informational cues—how detailed and concrete the message was—to decide whether to trust it. However, once the chatbot appeared human-like, people shifted their attention from what was said to who was saying it. Anthropomorphic features activated social trust, leading consumers to evaluate the source as if it were a human representative rather than an algorithm. The following section discusses the theoretical and practical implications of the above findings.

7. General Discussion

Although AI chatbots are increasingly adopted in charitable fundraising, their effectiveness depends not only on automation or efficiency but also on how they communicate with potential donors. Prior research provides limited guidance on which communication strategies enable AI agents to enhance persuasion, mitigate algorithm aversion, and foster trust in nonprofit organizations. The present research addresses this gap through three complementary experiments.
Across these studies, a consistent pattern emerged. When AI chatbots communicated using concrete, data-driven messages, donors reported higher trust in the charitable organization and greater willingness to donate than when abstract, value-oriented framing was employed. In contrast, when the same messages were delivered by a human representative, this distinction largely disappeared. This divergence arises because AI chatbots are generally perceived as algorithmic and precise but limited in their understanding of values. Consequently, donors respond more favorably when AI agents provide factual and procedural information rather than abstract moral appeals that appear inconsistent with their perceived capabilities. The robustness of this pattern across both Eastern and Western cultural contexts indicates that the preference for concreteness in AI-mediated communication reflects a fundamental and cross-cultural psychological mechanism. Study 3 further examined how enhancing the anthropomorphic design of AI chatbots may alter this dynamic. When human-like cues such as facial resemblance were introduced, abstract messages gained credibility and became more persuasive, leading to greater consumer trust and donation intentions. Table 1 summarizes the hypothesis-testing results of Studies 1 to 3. Taken together, these findings have important implications for interactive marketing research and practitioners.

7.1. Theoretical Implications

This study makes several key theoretical contributions. First, it extends the literature on consumer–AI chatbot interaction in online fundraising contexts by introducing CLT to examine how message framing influences consumer trust and donation behavior. Prior studies have noted general resistance to AI-mediated persuasion [92]. Some studies suggest that this resistance can be mitigated when AI agents adopt emotional expressions (e.g., emojis [93]) or socially oriented language (e.g., using endearments such as “honey” or “dear”, [28]). However, little attention has been given to whether the abstractness or concreteness of message framing may also alleviate such resistance. Building on interactive marketing research that highlights the role of source–message congruence derived from CLT in promoting prosocial behavior [54,57], the current study applies this framework to the context of human–AI chatbot communication in online fundraising. The findings reveal that messages framed at a low construal level—those emphasizing concrete details, procedural information, objective data, and short-term outcomes—are more effective in enhancing trust and encouraging donation intentions than high-level, abstract messages. This pattern likely reflects consumers’ implicit belief that AI agents, compared with humans, lack subjective agency and moral judgment but are perceived as more rational, accurate, and objective communicators.
Second, this study contributes to research on algorithm aversion and perceived agency in AI-mediated communication. Previous research has mainly examined whether consumers reject algorithmic recommendations or decisions (e.g., [18]), often focusing on task fit or transparency (e.g., [94]). The current study identifies a more fundamental cognitive tendency underlying algorithm aversion: consumers tend to perceive AI as a low-agency, low-construal agent. Rather than attempting to reverse this perception, the present research demonstrates that aligning communication design with this belief can improve persuasion and prosocial behavior. By strategically matching AI’s perceived low-level construal with concrete message framing, this study illustrates how leveraging consumer stereotypes can transform potential algorithm aversion into trust and acceptance. These results further reinforce prior evidence that consumers tend to trust AI more when it performs objective, data-driven tasks rather than subjective or value-based ones [95].
This research further contributes to trust transfer theory in interactive marketing by revealing how communication design enables trust to move from AI chatbots to the organizations they represent. Prior studies have shown that trust can transfer across related entities when credible and consistent cues exist—for instance, from influencer accounts to their posts [15], or via agents whose anthropomorphic features and contextual alignment facilitate such transfer [20]. Extending this logic to AI-mediated fundraising, the present research demonstrates that concrete, low-level message framing strengthens trust transfer by aligning with consumers’ perceptions of AI competence, whereas abstract, value-oriented framing without sufficient anthropomorphic cues weakens it.
Finally, this study also enriches the literature on anthropomorphism within human–AI interactive marketing contexts. Drawing on the CASA framework [8,96], anthropomorphism can arise not only from visual appearance [97] but also from behavioral and linguistic cues [88] that signal social presence and humanlike intentionality [13,98,99]. The current research shows that when abstract, value-oriented messages are combined with anthropomorphic chatbot designs, consumer resistance significantly decreases. Because value judgments and goal-setting reflect autonomous intention and are perceived as uniquely human capacities that artificial agents typically lack [100], anthropomorphic cues can reduce psychological distance and enhance the chatbot’s trustworthiness in interactive exchanges. This suggests that perceived anthropomorphism can moderate message-level effects by enabling consumers to attribute complex social intentions to nonhuman agents. Together, these findings contribute to interactive marketing theory by illustrating how the congruence between message construal and agent characteristics shapes trust formation, persuasion, and engagement in digitally mediated interactions.

7.2. Practical Implications

This study also provides several actionable insights for charitable organizations, nonprofit institutions, and enterprises employing AI chatbot technology in interactive fundraising campaigns. Given that consumers tend to perceive AI chatbots as operating at a low level of construal, organizations should assign them to deliver messages framed with concrete, data-driven features. Chatbots can emphasize implementation details, procedural steps, and short-term outcomes, while avoiding messages centered on abstract values, long-term visions, or moral appeals that may feel psychologically distant to consumers [22]. These insights offer valuable guidance for nonprofits in training and deploying AI communication agents effectively.
When messages inevitably involve abstract values, visions, or moral appeals, organizations should consider anthropomorphizing their AI chatbots through human-like visual design, conversational style, or interactive cues. At the same time, abstract appeals delivered by AI systems may be more persuasive among individuals who naturally perceive these technologies as more agentic and humanlike. Furthermore, nonprofit organizations can enhance message effectiveness by positioning AI communication agents as part of a human–AI team rather than as independent entities. For instance, the chatbot could introduce itself as a member of the charity’s project team and use inclusive language such as “we,” or emphasize human involvement and oversight (e.g., “Our team is here to ensure your donation makes a lasting impact”). Framing AI communication in this collaborative manner may increase social connectedness and make even abstract or vision-oriented messages more compelling to potential donors. In line with broader trends in new media marketing, these strategies underscore the importance of tailoring AI-mediated interactions to match consumer expectations of agent competence and message content [101].
Further, the evidence from Study 2b suggests that the preference for concrete communication with AI transcends cultural boundaries, reflecting a widely held lay belief that AI lacks human-like consciousness and agency. This belief may override Western consumers’ trait-level preference for abstract, value-oriented messages and indicates that cultural variations in AI acceptance do not fully offset this perception. Hence, in Western markets where moral and purpose-driven storytelling is common, adopting a concrete communication style when using AI chatbots may yield stronger persuasive effects.
Extending beyond nonprofit contexts, these insights can be generalized to a wider range of interactive marketing applications. The effectiveness of AI-mediated communication largely depends on the alignment between message construal and the perceived capabilities of the communication agent. AI chatbots tend to perform better when delivering product-specific or analytical information, whereas human agents or anthropomorphized AI are more effective in communicating aspirational, value-oriented narratives. Designing messages that align with the perceived nature of the agent offers a pragmatic guideline for enhancing consumer engagement and persuasion across both nonprofit and commercial settings [7,54].
Finally, ethical considerations are essential when deploying AI in fundraising. Organizations must ensure transparency in data use, protect donor privacy, and obtain informed consent when collecting or personalizing content based on personal data. Addressing these ethical concerns not only safeguards trust but also enhances the social legitimacy and societal relevance of AI-assisted fundraising practices.

7.3. Limitations and Future Directions

Several limitations should be acknowledged. First, the current study relied on self-reported donation intentions rather than actual donation behaviors. While intention is a useful proxy, it may not accurately capture real-world charitable giving because of contextual, emotional, and financial constraints. Future experiments and field studies should integrate behavioral tracking or longitudinal data to validate the causal links between chatbot persuasion, trust, and actual donation outcomes.
Second, the sample characteristics impose constraints on external validity. Although this study incorporated participants from both China and the United Kingdom, potential sample composition bias remains. Participants were relatively young and familiar with online communication, which may not reflect the broader donor population. Younger individuals may be more adaptive to AI-mediated persuasion and more accepting of anthropomorphic cues [102]. Future work could extend to more demographically diverse groups, including older or less technologically experienced donors, whose trust responses to AI systems may differ substantially.
Another limitation concerns the potential influence of AI adoption on consumers, which was not assessed in the present studies. Consumers with greater AI adoption might either feel emotionally connected to AI and view it as a social partner [103], thereby reducing the perceived gap between AI and abstract-framing information, or regard AI mainly as an efficient tool [104], which could reinforce the impression of AI as a communication agent with a lower construal level. These contrasting tendencies suggest that AI adoption may play a nuanced moderating role. Nevertheless, the consistent patterns observed across Chinese and British samples—contexts that differ in AI acceptance—suggest that perceiving AI as a low-level construal agent may be relatively robust. Future research could further examine how AI adoption and related tendencies (e.g., AI objectification, inclusion of AI in the self) influence the persuasiveness of different message framings in AI-mediated communication.
Fourth, while this research primarily focused on message construal level and anthropomorphism, future studies could further explore more potential mechanisms within the framework of construal level theory (CLT). For instance, perceived psychological distance between donors and beneficiaries may influence the effectiveness of human–AI chatbot interactions in charitable marketing [105] and shape consumers’ empathy toward recipients [106]. Language style also plays a critical role in persuasive communication, particularly in charitable fundraising, where narrative structure and linguistic framing can significantly shape emotional engagement and credibility perceptions [107]. Beyond abstract versus concrete framing, future research could investigate how different narrative modes, rhetorical devices, and message themes affect trust formation and donation willingness in AI-mediated interactions.
Fifth, given the rapid advancement of artificial intelligence technology, consumers’ lay beliefs about AI may evolve over time. Some consumers might already perceive AI as highly sophisticated and somewhat capable of simulating human self-awareness. Such consumers may be less resistant to AI chatbots that use high-level construal communication frames to promote donation behaviors and the sustainability of nonprofit organizations [34]. Longitudinal research could capture how evolving consumer beliefs reshape the effectiveness of different message framings in fundraising contexts.
Finally, this study focuses primarily on the fundraising function of AI chatbots. Yet the application of AI in nonprofit organizations extends to virtual influencer design [108,109], project management, impact evaluation, resource allocation, and donor relationship management. Future research could explore these broader applications and their implications, and could elucidate the specific task contexts in which AI might outperform humans in promoting moral and prosocial behaviors among consumers.

8. Conclusions

This study responds to the research gap regarding how AI chatbots, as emerging actors in interactive marketing, can be strategically designed to foster trust and prosocial behavior in online charitable fundraising. Prior research has largely emphasized consumers’ algorithm aversion or compared AI with human agents in persuasive contexts, but little is known about the communication style conditions under which AI agents can effectively engage consumers. Drawing on construal level theory, the present research shows that concrete message framing significantly enhances consumer trust in nonprofits and donation intentions when delivered by AI chatbots, whereas abstract framing is persuasive only when combined with anthropomorphic design. These findings extend theories of AI persuasion and human–AI interaction by identifying message framing as a boundary condition of AI effectiveness, and they contribute to the Special Issue by illustrating how new media technologies reshape interactive marketing strategies in the nonprofit sector. Practically, the study highlights how nonprofits can optimize AI-driven communication to reduce consumer resistance and build sustainable trust. Limitations concerning cultural context and experimental scope suggest avenues for future research, including cross-cultural validation and the exploration of AI applications across broader nonprofit marketing functions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jtaer20040341/s1, data used in Study 1; data used in Study 2a and 2b; data used in Study 3.

Author Contributions

Conceptualization, J.S. (Jin Sun) and J.S. (Jia Si); methodology, J.S. (Jin Sun) and J.S. (Jia Si); software, J.S. (Jia Si); validation, J.S. (Jin Sun) and J.S. (Jia Si); formal analysis, J.S. (Jia Si); investigation, J.S. (Jin Sun) and J.S. (Jia Si); resources, J.S. (Jin Sun) and J.S. (Jia Si); data curation, J.S. (Jia Si); writing—original draft preparation, J.S. (Jin Sun) and J.S. (Jia Si); writing—review and editing, J.S. (Jin Sun) and J.S. (Jia Si); visualization, J.S. (Jin Sun) and J.S. (Jia Si); supervision, J.S. (Jin Sun); project administration, J.S. (Jin Sun); funding acquisition, J.S. (Jin Sun). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 72272033.

Institutional Review Board Statement

This study constitutes non-interventional social science research. It involves no physical, psychological, or clinical interventions and collects no sensitive or personally identifiable data (e.g., names, identification numbers, contact details, or medical history). The study posed no more than minimal risk and did not involve vulnerable populations, deception, or sensitive topics. All data were collected anonymously and cannot be linked to any individual participant; no personally identifiable information was stored; questionnaire data were securely maintained and used solely for academic research; and findings are presented only in aggregated statistical form to prevent re-identification. In accordance with Recital 26 of the EU General Data Protection Regulation (GDPR), which exempts anonymized data from data protection principles, and the Ethical Review Measures for Life Sciences and Medical Research Involving Humans (National Health Commission of the People’s Republic of China, 2023), which explicitly exempts non-interventional anonymous questionnaire-based studies involving no sensitive data or vulnerable groups, this project did not require ethics committee approval.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data of Study 1, Study 2a, Study 2b, and Study 3 presented in this study are available in Supplementary Materials.

Acknowledgments

We thank Tonglin Wu (Research Fellow of Chnenergy Institute of Technology and Economics) for helping polish the article and providing technical support during the data analysis phase of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Online donation campaign interface used in Study 1 (illustrated with the AI chatbot—concrete framing condition).
Figure A2. Customer service communication scripts used in Study 1.
Figure A3. Customer service communication scripts used in Study 2a (concrete framing condition).
Figure A4. Customer service communication scripts used in Study 2a (abstract framing condition).
Figure A5. Customer service communication scripts used in Study 2b (concrete framing condition).
Figure A6. Customer service communication scripts used in Study 2b (abstract framing condition).
Figure A7. Customer service communication scripts used in Study 3 (concrete framing condition).
Figure A8. Customer service communication scripts used in Study 3 (abstract framing condition).

Appendix B

Table A1. Summary of measures used in the studies.

Perceived message credibility (Studies 2a, 2b, and 3) [86]
Prompt: To what extent do you feel that this AI/human agent is …
Items: Dependable—Undependable; Honest—Dishonest; Reliable—Unreliable; Sincere—Insincere; Trustworthy—Untrustworthy; Expert—Not an expert; Experienced—Inexperienced; Knowledgeable—Unknowledgeable; Qualified—Unqualified; Skilled—Unskilled.
Response format: 7-point Stapel scale (1 = strongly agree with the statement on the left, 7 = strongly agree with the statement on the right).
Reliability: α = 0.923 (Study 2a); α = 0.965 (Study 2b); α = 0.914 (Study 3).

Trust in charitable organization (Studies 1, 2a, 2b, and 3) [78]
Items: I have no doubt this charitable organization can be trusted. / I feel that this charitable organization is reliable. / I trust this charitable organization.
Response format: 7-point Likert scale (1 = definitely not, 7 = definitely yes).
Reliability: α = 0.834 (Study 1); α = 0.825 (Study 2a); α = 0.955 (Study 2b); α = 0.845 (Study 3).

Donation intention (Studies 1, 2a, 2b, and 3) [79]
Items: I am willing to donate to this project through the organization. / I am likely to donate to this project through the organization.
Response format: 7-point Likert scale (1 = definitely not, 7 = definitely yes).
Reliability: α = 0.807 (Study 1); α = 0.825 (Study 2a); α = 0.964 (Study 2b); α = 0.750 (Study 3).

Positive emotion (Studies 2a and 2b) [87]
Prompt: Your conversation with the online customer service …
Items: made you feel happy. / elicited positive feelings. / elicited feelings of joy. / left you with a good feeling.
Response format: 7-point Likert scale (1 = strongly disagree, 7 = strongly agree).
Reliability: α = 0.926.

Negative emotion (Studies 2a and 2b) [87]
Prompt: Your conversation with the online customer service …
Items: made you feel sad. / elicited negative feelings. / left you with a bad feeling.
Response format: 7-point Likert scale (1 = strongly disagree, 7 = strongly agree).
Reliability: α = 0.885.
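The reliabilities in Table A1 are Cronbach’s α coefficients. For readers who wish to verify such values on raw item data, the minimal Python sketch below computes α for a respondents-by-items matrix; the data shown are hypothetical, and reverse-keyed items (e.g., the Stapel-scale credibility items) are assumed to have been recoded to a common direction beforehand.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to the three trust-in-charitable-organization items
# (7-point Likert scale); a real analysis would use the full study data.
responses = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [7, 7, 7],
    [4, 5, 4],
    [6, 6, 5],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")
```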
Table A2. Summary of randomization checks for Studies 1–3. Note: the number of participants in each condition (N), the arithmetic means of the demographic variables, and the corresponding F-test results are reported. Gender and education are dummy-coded variables: for gender, 0 = male, 1 = female; for education level, 1 = high school, 2 = BTEC Diploma/Foundation Degree/Higher National Diploma, 3 = bachelor’s degree, 4 = master’s degree, 5 = doctoral degree.

Pretest of Study 1
Concrete framing: N = 50; gender = 0.64; education = 3.00; age = 36.18
Abstract framing: N = 50; gender = 0.54; education = 2.86; age = 37.14
F-value: gender = 1.02; education = 0.80; age = 0.17
p-value: gender = 0.31; education = 0.37; age = 0.68

Study 1
AI chatbot customer service, concrete framing: N = 71; gender = 0.68; education = 3.01; age = 30.31
AI chatbot customer service, abstract framing: N = 67; gender = 0.66; education = 3.12; age = 31.99
Human customer service, concrete framing: N = 69; gender = 0.75; education = 3.00; age = 30.41
Human customer service, abstract framing: N = 70; gender = 0.77; education = 3.04; age = 31.93
F-value: gender = 1.08; education = 0.65; age = 1.26
p-value: gender = 0.36; education = 0.58; age = 0.29

Pretest of Study 2a
Concrete framing: N = 49; gender = 0.61; education = 2.96; age = 36.82
Abstract framing: N = 50; gender = 0.56; education = 2.86; age = 36.24
F-value: gender = 0.27; education = 0.35; age = 0.06
p-value: gender = 0.60; education = 0.56; age = 0.81

Study 2a
AI chatbot customer service, concrete framing: N = 75; gender = 0.69; education = 3.00; age = 30.63
AI chatbot customer service, abstract framing: N = 75; gender = 0.65; education = 3.01; age = 29.47
Human customer service, concrete framing: N = 73; gender = 0.73; education = 2.88; age = 30.49
Human customer service, abstract framing: N = 76; gender = 0.68; education = 3.04; age = 31.32
F-value: gender = 0.31; education = 0.82; age = 0.75
p-value: gender = 0.82; education = 0.49; age = 0.52

Pretest of Study 2b
Concrete framing: N = 52; gender = 0.54; education = 2.58; age = 43.00
Abstract framing: N = 45; gender = 0.44; education = 2.36; age = 46.53
F-value: gender = 0.78; education = 1.01; age = 1.83
p-value: gender = 0.38; education = 0.32; age = 0.18

Study 2b
AI chatbot customer service, concrete framing: N = 70; gender = 0.49; education = 2.43; age = 43.13
AI chatbot customer service, abstract framing: N = 70; gender = 0.59; education = 2.66; age = 42.86
Human customer service, concrete framing: N = 70; gender = 0.41; education = 2.53; age = 43.09
Human customer service, abstract framing: N = 69; gender = 0.43; education = 2.66; age = 45.28
F-value: gender = 1.65; education = 0.72; age = 0.54
p-value: gender = 0.18; education = 0.54; age = 0.66

Study 3
AI chatbot customer service, concrete framing: N = 66; gender = 0.65; education = 2.97; age = 32.52
AI chatbot customer service, abstract framing: N = 64; gender = 0.64; education = 3.13; age = 29.56
Human customer service, concrete framing: N = 69; gender = 0.65; education = 2.93; age = 32.70
Human customer service, abstract framing: N = 66; gender = 0.76; education = 3.03; age = 30.79
Anthropomorphized AI chatbot customer service, concrete framing: N = 65; gender = 0.62; education = 3.00; age = 31.89
Anthropomorphized AI chatbot customer service, abstract framing: N = 69; gender = 0.77; education = 2.88; age = 30.09
F-value: gender = 1.30; education = 1.08; age = 1.85
p-value: gender = 0.26; education = 0.37; age = 0.10
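Each randomization check in Table A2 is a one-way ANOVA testing whether a demographic variable differs across experimental conditions; a non-significant F-statistic (p > 0.05) indicates successful random assignment. A minimal sketch of this check follows, using simulated age data with the Study 1 cell sizes; the simulated values are illustrative, not the study’s data.

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated ages for the four Study 1 conditions (cell sizes from Table A2);
# the same check is repeated for the gender and education dummy variables.
rng = np.random.default_rng(42)
age_by_condition = [
    rng.normal(30, 8, 71),  # AI chatbot, concrete framing
    rng.normal(31, 8, 67),  # AI chatbot, abstract framing
    rng.normal(30, 8, 69),  # human agent, concrete framing
    rng.normal(32, 8, 70),  # human agent, abstract framing
]

f_stat, p_value = f_oneway(*age_by_condition)
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")  # p > 0.05: no demographic imbalance
```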

References

  1. Grand View Research. Conversational AI Market Size, Share | Industry Report, 2030. 2023. Available online: https://www.grandviewresearch.com/industry-analysis/conversational-ai-market-report (accessed on 30 January 2025).
  2. Climate Reality Project. Climate Reality’s New Rapid Response Team Is a Facebook Bot for Good, Not Evil. CleanTechnica, 2017. Available online: https://www.climaterealityproject.org/ (accessed on 30 January 2025).
  3. Winstead, E. Can AI Chatbots Correctly Answer Questions About Cancer?—NCI. Cancer Currents Blog 2023. Available online: https://www.cancer.gov/news-events/cancer-currents-blog/2023/chatbots-answer-cancer-questions (accessed on 30 January 2025).
  4. Arotherham. New AI Newsletter—Leading Indicator: AI in Education. Eduwonk 2024. Available online: https://www.eduwonk.com/2024/06/new-ai-newsletter-from-bellwether-leading-indicator-ai-in-education.html (accessed on 30 January 2025).
  5. Sundar, S.S. Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). J. Comput. Mediat. Commun. 2020, 25, 74–88. [Google Scholar] [CrossRef]
  6. Jin, E.; Eastin, M.S. Birds of a feather flock together: Matched personality effects of product recommendation chatbots and users. J. Res. Interact. Mark. 2023, 17, 416–433. [Google Scholar] [CrossRef]
  7. Wei, Y.; Syahrivar, J.; Simay, A.E. Unveiling the influence of anthropomorphic chatbots on consumer behavioral intentions: Evidence from China and Indonesia. J. Res. Interact. Mark. 2025, 19, 132–157. [Google Scholar] [CrossRef]
  8. Wang, C.L. Editorial—What is an interactive marketing perspective and what are emerging research areas? J. Res. Interact. Mark. 2024, 18, 161–165. [Google Scholar] [CrossRef]
  9. Zhou, Y.; Fei, Z.; He, Y.; Yang, Z. How human-chatbot interaction impairs charitable giving: The role of moral judgment. J. Bus. Ethics 2022, 178, 849–865. [Google Scholar] [CrossRef]
  10. Lv, L.; Huang, M. Can personalized recommendations in charity advertising boost donation? The role of perceived autonomy. J. Advert. 2022, 53, 36–53. [Google Scholar] [CrossRef]
  11. Kim, S.; Priluck, R. Consumer responses to generative AI chatbots versus search engines for product evaluation. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 93. [Google Scholar] [CrossRef]
  12. Yang, C.; Yang, Y.; Zhang, Y. Understanding the impact of artificial intelligence on the justice of charitable giving: The moderating role of trust and regulatory orientation. J. Consum. Behav. 2024, 23, 2624–2636. [Google Scholar] [CrossRef]
  13. Quach, S.; Cheah, I.; Thaichon, P. The power of flattery: Enhancing prosocial behavior through virtual influencers. Psychol. Mark. 2024, 41, 1629–1648. [Google Scholar] [CrossRef]
  14. Watson, J.; Valsesia, F.; Segal, S. Assessing AI receptivity through a persuasion knowledge lens. Curr. Opin. Psychol. 2024, 58, 101834. [Google Scholar] [CrossRef]
  15. Liao, C.H.; Hsieh, J.-K.; Kumar, S. Does the verified badge of social media matter? The perspective of trust transfer theory. J. Res. Interact. Mark. 2024, 18, 1017–1033. [Google Scholar] [CrossRef]
  16. Akdemir, D.M.; Bulut, Z.A. Business and customer-based chatbot activities: The role of customer satisfaction in online purchase intention and intention to reuse chatbots. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 2961–2979. [Google Scholar] [CrossRef]
  17. Qiu, X.; Wang, Y.; Zeng, Y.; Cong, R. Artificial intelligence disclosure in cause-related marketing: A persuasion knowledge perspective. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 193. [Google Scholar] [CrossRef]
  18. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef]
  19. Longoni, C.; Cian, L. Artificial intelligence in utilitarian vs. hedonic contexts: The “word-of-machine” effect. J. Mark. 2022, 86, 91–108. [Google Scholar] [CrossRef]
  20. Qu, Y.; Baek, E. Let virtual creatures stay virtual: Tactics to increase trust in virtual influencers. J. Res. Interact. Mark. 2024, 18, 91–108. [Google Scholar] [CrossRef]
  21. Yoo, J.W.; Park, J.; Park, H. How can I trust you if you’re fake? Understanding human-like virtual influencer credibility and the role of textual social cues. J. Res. Interact. Mark. 2025, 19, 730–748. [Google Scholar] [CrossRef]
  22. Trope, Y.; Liberman, N. Construal-level theory of psychological distance. Psychol. Rev. 2010, 117, 440–463. [Google Scholar] [CrossRef]
  23. Chiarella, S.; Torromino, G.; Gagliardi, D.; Rossi, D.; Babiloni, F.; Cartocci, G. Investigating the negative bias towards artificial intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Comput. Hum. Behav. 2022, 137, 107406. [Google Scholar] [CrossRef]
  24. Ahn, R.J.; Cho, S.Y.; Tsai, W.S. Demystifying computer-generated imagery (CGI) influencers: The effect of perceived anthropomorphism and social presence on brand outcomes. J. Interact. Advert. 2022, 22, 327–335. [Google Scholar] [CrossRef]
  25. Tran, A.D.; Pallant, J.I.; Johnson, L.W. Exploring the impact of chatbots on consumer sentiment and expectations in retail. J. Retail. Consum. Serv. 2021, 63, 102718. [Google Scholar] [CrossRef]
  26. Pitardi, V.; Marriott, H.R. Alexa, she’s not human but… Unveiling the drivers of consumers’ trust in voice-based artificial intelligence. Psychol. Mark. 2021, 38, 626–642. [Google Scholar] [CrossRef]
  27. Sundar, S.S.; Nass, C. Source orientation in human–computer interaction: Programmer, networker, or independent social actor. Commun. Res. 2000, 27, 683–703. [Google Scholar] [CrossRef]
  28. Zhu, J.; Jiang, Y.; Wang, X.; Huang, S. Social- or task-oriented: How does social crowding shape consumers’ preferences for chatbot conversational styles? J. Res. Interact. Mark. 2023, 17, 641–662. [Google Scholar] [CrossRef]
  29. Belanche, D.; Casaló, L.V.; Flavián, M.; Ibáñez-Sánchez, S. Understanding influencer marketing: The role of congruence between influencers, products and consumers. J. Bus. Res. 2021, 132, 186–195. [Google Scholar] [CrossRef]
  30. Chen, S.; Li, X.; Liu, K.; Wang, X. Chatbot or human? The impact of online customer service on consumers’ purchase intentions. Psychol. Mark. 2023, 40, 2186–2200. [Google Scholar] [CrossRef]
  31. Khamitov, M.; Rajavi, K.; Huang, D.-W.; Hong, Y. Consumer trust: Meta-analysis of 50 years of empirical research. J. Consum. Res. 2024, 51, 7–18. [Google Scholar] [CrossRef]
  32. Bekkers, R.; Wiepking, P. Accuracy of self-reports on donations to charitable organizations. Qual. Quant. 2011, 45, 1369–1383. [Google Scholar] [CrossRef]
  33. Hibbert, S.; Horne, S. Giving to charity: Questioning the donor decision process. J. Consum. Mark. 1996, 13, 4–13. [Google Scholar] [CrossRef]
  34. Kumar, A.; Chakrabarti, S. Charity donor behavior: A systematic literature review and research agenda. J. Nonprofit Public Sect. Mark. 2023, 35, 1–46. [Google Scholar] [CrossRef]
  35. Davis, M.H.; Mitchell, K.V.; Hall, J.A.; Lothert, J.; Snapp, T.; Meyer, M. Empathy, expectations, and situational preferences: Personality influences on the decision to participate in volunteer helping behaviors. J. Pers. 1999, 67, 469–503. [Google Scholar] [CrossRef] [PubMed]
  36. Aquino, K.; Reed, A. The self-importance of moral identity. J. Pers. Soc. Psychol. 2002, 83, 1423–1440. [Google Scholar] [CrossRef] [PubMed]
  37. Brañas-Garza, P.; Capraro, V.; Rascón-Ramírez, E. Gender differences in altruism on mechanical Turk: Expectations and actual behaviour. Econ. Lett. 2018, 170, 19–23. [Google Scholar] [CrossRef]
  38. James, R.N.; Sharpe, D.L. The nature and causes of the U-shaped charitable giving profile. Nonprofit Volunt. Sect. Q. 2007, 36, 218–238. [Google Scholar] [CrossRef]
  39. Mathur, A. Older adults’ motivations for gift giving to charitable organizations: An exchange theory perspective. Psychol. Mark. 1996, 13, 107–123. [Google Scholar] [CrossRef]
  40. Penner, L.A.; Dovidio, J.F.; Piliavin, J.A.; Schroeder, D.A. Prosocial behavior: Multilevel perspectives. Annu. Rev. Psychol. 2005, 56, 365–392. [Google Scholar] [CrossRef]
  41. Omoto, A.M. Processes of Community Change and Social Action; Lawrence Erlbaum: Mahwah, NJ, USA, 2005; ISBN 978-0-8058-4393-4. [Google Scholar]
  42. Smith, R.W.; Faro, D.; Burson, K.A. More for the many: The influence of entitativity on charitable giving. J. Consum. Res. 2013, 39, 961–976. [Google Scholar] [CrossRef]
  43. Cryder, C.E.; Loewenstein, G.; Scheines, R. The donor is in the details. Organ. Behav. Hum. Decis. Process. 2013, 120, 15–23. [Google Scholar] [CrossRef]
  44. Gefen, D. E-Commerce: The role of familiarity and trust. Omega 2000, 28, 725–737. [Google Scholar] [CrossRef]
  45. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709. [Google Scholar] [CrossRef]
  46. Wymer, W.; Becker, A.; Boenigk, S. The antecedents of charity trust and its influence on charity supportive behavior. J. Philanthr. Mark. 2021, 26, e1690. [Google Scholar] [CrossRef]
  47. McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and validating trust measures for E-commerce: An integrative typology. Inf. Syst. Res. 2002, 13, 334–359. [Google Scholar] [CrossRef]
  48. Khavas, Z.R. A review on trust in human-robot interaction. arXiv 2024, arXiv:2105.10045. [Google Scholar]
  49. Wang, W.; Qiu, L.; Kim, D.; Benbasat, I. Effects of rational and social appeals of online recommendation agents on cognition- and affect-based trust. Decis. Support Syst. 2016, 86, 48–60. [Google Scholar] [CrossRef]
  50. Sargeant, A.; West, D.C.; Ford, J.B. Does perception matter? An empirical analysis of donor behaviour. Serv. Ind. J. 2004, 24, 19–36. [Google Scholar] [CrossRef]
  51. Etzioni, A. Cyber trust. J. Bus. Ethics 2019, 156, 1–13. [Google Scholar] [CrossRef]
  52. Stewart, K.J. Trust transfer on the world wide web. Org. Sci. 2003, 14, 5–17. [Google Scholar] [CrossRef]
  53. Eyal, T.; Liberman, N. Morality and psychological distance: A construal level theory perspective. In The Social Psychology of Morality: Exploring the Causes of Good and Evil; American Psychological Association: Washington, DC, USA, 2012; pp. 185–202. [Google Scholar]
  54. Huang, T.-L.; Chung, H.F.L. Achieving close psychological distance and experiential value in the MarTech Servicescape: A mindfulness-oriented service perspective. J. Res. Interact. Mark. 2025, 19, 358–386. [Google Scholar] [CrossRef]
  55. Lee, S.J.; Brennan, E.; Gibson, L.A.; Tan, A.S.L.; Kybert-Momjian, A.; Liu, J.; Hornik, R. Predictive validity of an empirical approach for selecting promising message topics: A randomized-controlled study: Validating a message topic selection approach. J. Commun. 2016, 66, 433–453. [Google Scholar] [CrossRef]
  56. Lee, S.J. The role of construal level in message effects research: A review and future directions. Commun. Theory 2019, 29, 231–250. [Google Scholar] [CrossRef]
  57. Wu, J.; Peng, X. Persuasive and Substantive: The impact of matching charity advertising appeal with numeric format on individual donation intentions. J. Res. Interact. Mark. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  58. Vallacher, R.R.; Wegner, D.M. Levels of personal agency: Individual variation in action identification. J. Pers. Soc. Psychol. 1989, 57, 660–671. [Google Scholar] [CrossRef]
  59. Kim, T.W.; Duhachek, A. Artificial Intelligence and persuasion: A construal-level account. Psychol. Sci. 2020, 31, 363–380. [Google Scholar] [CrossRef] [PubMed]
  60. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, Global Edition; Pearson Education: London, UK, 2021. [Google Scholar]
  61. Teeny, J.; Briñol, P.; Petty, R.E. The elaboration likelihood model: Understanding consumer attitude change. In Routledge International Handbook of Consumer Psychology; Routledge: Oxfordshire, UK, 2016. [Google Scholar]
  62. Briñol, P.; Petty, R.E. Source factors in persuasion: A self-validation approach. Eur. Rev. Soc. Psychol. 2009, 20, 49–96. [Google Scholar] [CrossRef]
  63. Jackob, N. Credibility Effects. In The International Encyclopedia of Communication; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2008. [Google Scholar]
  64. Appelman, A.; Sundar, S.S. Measuring message credibility: Construction and validation of an exclusive scale. J. Mass Commun. Q. 2016, 93, 59–79. [Google Scholar] [CrossRef]
  65. Burgoon, J.K.; Birk, T.; Pfau, M. Nonverbal behaviors, persuasion, and credibility. Hum. Commun. Res. 1990, 17, 140–169. [Google Scholar] [CrossRef]
  66. Burgoon, J.K.; Bonito, J.A.; Bengtsson, B.; Cederberg, C.; Lundeberg, M.; Allspach, L. Interactivity in human–computer interaction: A study of credibility, understanding, and influence. Comput. Hum. Behav. 2000, 16, 553–574. [Google Scholar] [CrossRef]
  67. Sun, J.; Keh, H.T.; Lee, A.Y. Shaping consumer preference using alignable attributes: The roles of regulatory orientation and construal level. Int. J. Res. Mark. 2019, 36, 151–168. [Google Scholar] [CrossRef]
  68. Jäger, A.-K.; Weber, A. Can you believe it? The effects of benefit type versus construal level on advertisement credibility and purchase intention for organic food. J. Clean. Prod. 2020, 257, 120543. [Google Scholar] [CrossRef]
  69. Liu, L.; Lee, M.K.O.; Liu, R.; Chen, J. Trust transfer in social media brand communities: The role of consumer engagement. Int. J. Inf. Manag. 2018, 41, 1–13. [Google Scholar] [CrossRef]
  70. Waytz, A.; Cacioppo, J.; Epley, N. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 2010, 5, 219–232. [Google Scholar] [CrossRef]
  71. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef]
  72. Sundar, S.S.; Kim, J. Machine heuristic: When we trust computers more than humans with our personal information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019; pp. 1–9. [Google Scholar]
  73. Pizzi, G.; Vannucci, V.; Mazzoli, V.; Donvito, R. I, Chatbot! The impact of anthropomorphism and gaze direction on willingness to disclose personal information and behavioral intentions. Psychol. Mark. 2023, 40, 1372–1387. [Google Scholar] [CrossRef]
  74. van Pinxteren, M.M.E.; Wetzels, R.W.H.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in humanoid robots: Implications for services marketing. J. Serv. Mark. 2019, 33, 507–518. [Google Scholar] [CrossRef]
  75. Uysal, E.; Alavi, S.; Bezençon, V. Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. J. Acad. Mark. Sci. 2022, 50, 1153–1175. [Google Scholar] [CrossRef]
  76. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [Google Scholar] [CrossRef]
  77. Xu, J.; Li, Z.; Wang, X.; Xia, C. Narrative information on secondhand products in E-commerce. Mark. Lett. 2022, 33, 625–644. [Google Scholar] [CrossRef]
  78. Li, F.; Kashyap, R.; Zhou, N.; Yang, Z. Brand trust as a second-order factor: An alternative measurement model. Int. J. Mark. Res. 2008, 50, 817–839. [Google Scholar] [CrossRef]
  79. Grinstein, A.; Kronrod, A. Does sparing the rod spoil the child? How praising, scolding, and an assertive tone can encourage desired behaviors. J. Mark. Res. 2016, 53, 433–441. [Google Scholar] [CrossRef]
  80. Hamilton, R.W.; Thompson, D.V. Is there a substitute for direct experience? Comparing consumers’ preferences after direct and indirect product experiences. J. Consum. Res. 2007, 34, 546–555. [Google Scholar] [CrossRef]
  81. Zhang, J.; Xu, X.; Keh, H.T. I implement, they deliberate: The matching effects of point of view and mindset on consumer attitudes. J. Bus. Res. 2022, 139, 397–410. [Google Scholar] [CrossRef]
  82. Humphreys, A.; Isaac, M.S.; Wang, R.J.-H. Construal matching in online search: Applying text analysis to illuminate the consumer decision journey. J. Mark. Res. 2021, 58, 1101–1119. [Google Scholar] [CrossRef]
  83. Semin, G.R.; Fiedler, K. The cognitive functions of linguistic categories in describing persons: Social cognition and language. J. Pers. Soc. Psychol. 1988, 54, 558–568. [Google Scholar] [CrossRef]
  84. Kirk, C.P.; Givi, J. The AI-authorship effect: Understanding authenticity, moral disgust, and consumer responses to AI-generated marketing communications. J. Bus. Res. 2025, 186, 114984. [Google Scholar] [CrossRef]
  85. Arango, L.; Singaraju, S.P.; Niininen, O. Consumer responses to AI-generated charitable giving Ads. J. Advert. 2023, 52, 486–503. [Google Scholar] [CrossRef]
  86. Ohanian, R. Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and attractiveness. J. Advert. 1990, 19, 39–52. [Google Scholar] [CrossRef]
  87. Di Muro, F.; Murray, K.B. An arousal regulation explanation of mood effects on consumer choice. J. Consum. Res. 2012, 39, 574–584. [Google Scholar] [CrossRef]
  88. Singelis, T.M. The measurement of independent and interdependent self-construals. Pers. Soc. Psychol. Bull. 1994, 20, 580–591. [Google Scholar] [CrossRef]
  89. Kühnen, U.; Oyserman, D. Thinking about the self influences thinking in general: Cognitive consequences of salient self-concept. J. Exp. Soc. Psychol. 2002, 38, 492–499. [Google Scholar] [CrossRef]
  90. Yam, K.C.; Tan, T.; Jackson, J.C.; Shariff, A.; Gray, K. Cultural differences in people’s reactions and applications of robots, algorithms, and artificial intelligence. Manag. Organ. Rev. 2023, 19, 859–875. [Google Scholar] [CrossRef]
  91. Douglas, B.D.; Ewell, P.J.; Brauer, M. Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLoS ONE 2023, 18, e0279720. [Google Scholar] [CrossRef]
  92. Dabholkar, P.A.; van Dolen, W.M.; de Ruyter, K. A dual-sequence framework for B2C relationship formation: Moderating effects of employee communication style in online group chat. Psychol. Mark. 2009, 26, 145–174. [Google Scholar] [CrossRef]
  93. Rashid Saeed, M.; Khan, H.; Lee, R.; Lockshin, L.; Bellman, S.; Cohen, J.; Yang, S. Construal level theory in advertising research: A systematic review and directions for future research. J. Bus. Res. 2024, 183, 114870. [Google Scholar] [CrossRef]
  94. Qin, X.; Zhou, X.; Chen, C.; Wu, D.; Zhou, H.; Dong, X.; Cao, L.; Lu, J.G. AI aversion or appreciation? A capability–personalization framework and a meta-analytic review. Psychol. Bull. 2025, 151, 580–599. [Google Scholar] [CrossRef]
  95. Castelo, N.; Bos, M.; Lehmann, D. Task-dependent algorithm aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  96. Gambino, A.; Fox, J.; Ratan, R.A. Building a stronger CASA: Extending the computers are social actors paradigm. Hum.-Mach. Commun. 2020, 1, 71–85. [Google Scholar] [CrossRef]
  97. Wang, Z.; Harris, G. From tools to trusted allies: Understanding how IoT devices foster deep consumer relationships and enhance CRM. J. Res. Interact. Mark. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  98. Wang, C.L. (Ed.) The Palgrave Handbook of Interactive Marketing; Springer International Publishing: Berlin/Heidelberg, Germany, 2023; ISBN 978-3-031-14960-3. [Google Scholar]
  99. Tsai, W.-H.S.; Liu, Y.; Chuan, C.-H. How chatbots’ social presence communication enhances consumer engagement: The mediating role of parasocial interaction and dialogue. J. Res. Interact. Mark. 2021, 15, 460–482. [Google Scholar] [CrossRef]
  100. Yam, K.C.; Eng, A.; Gray, K. Machine replacement: A mind-role fit perspective. Annu. Rev. Organ. Psychol. Organ. Behav. 2025, 12, 239–267. [Google Scholar] [CrossRef]
  101. Chappell, N.; Rosenkrans, S. Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good; Wiley: Hoboken, NJ, USA, 2025. [Google Scholar]
  102. Lalot, F.; Bertram, A.-M. When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot. J. Exp. Psychol. Gen. 2025, 154, 533–551. [Google Scholar] [CrossRef]
  103. Ding, Y.; Najaf, M. Interactivity, humanness, and trust: A psychological approach to AI chatbot adoption in e-commerce. BMC Psychol. 2024, 12, 595. [Google Scholar] [CrossRef]
  104. Yang, Y.; Luo, J.; Lan, T. An empirical assessment of a modified artificially intelligent device use acceptance model—From the task-oriented perspective. Front. Psychol. 2022, 13, 975307. [Google Scholar] [CrossRef]
  105. Jiang, Y.; Lei, J. I will recommend a salad but choose a burger for you: The effect of decision tasks and social distance on food decisions for others. J. Appl. Bus. Behav. Sci. 2025, 1, 98–117. [Google Scholar] [CrossRef]
  106. Song, W.; Zou, Y.; Zhao, T.; Huang, E.; Jin, X. The effects of public health emergencies on altruistic behaviors: A dilemma arises between safeguarding personal safety and helping others. J. Appl. Bus. Behav. Sci. 2025, 1, 85–97. [Google Scholar] [CrossRef]
  107. Robiady, N.D.; Windasari, N.A.; Nita, A. Customer engagement in online social crowdfunding: The influence of storytelling technique on donation performance. Int. J. Res. Mark. 2021, 38, 492–500. [Google Scholar] [CrossRef]
  108. Lee, H.; Weng, J.-Y.; Chen, K.Y. Interactive marketing and instant donations: Psychological drivers of virtual YouTuber followers’ contributions. J. Res. Interact. Mark. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  109. Zheng, X.; Cui, C.; Zhang, C.; Li, D. Who says what? How message appeals shape virtual- versus human-influencers’ impact on consumer engagement. J. Res. Interact. Mark. 2025; ahead-of-print. [Google Scholar] [CrossRef]
Figure 1. Proposed theoretical framework.
Figure 2. The effect of message framing and agent type on trust in charitable organization (Study 1).
Figure 3. The mediating effect of perceived message credibility, positive emotions, and negative emotions (Study 2a). ** p < 0.01, *** p < 0.001.
Figure 4. The effect of message framing and agent type on trust in charitable organization (Study 2b).
Figure 5. The effect of agent type, anthropomorphism, and message framing on trust in charitable organization.
Table 1. Summary of hypothesis testing results.

H1a: When the communication agent is an AI chatbot (vs. a human), using concrete (vs. abstract) message framing will more effectively enhance consumers’ trust in the charitable organization.
Effect size (η²): 0.035 (Study 1); 0.017 (Study 2a); 0.035 (Study 2b)
Significance (p-value): 0.002 (Study 1); 0.007 (Study 2a); 0.002 (Study 2b)

H1b: When the communication agent is an AI chatbot (vs. a human), using concrete (vs. abstract) message framing will more effectively enhance consumers’ donation intention.
Effect size (η²): 0.022 (Study 1); 0.014 (Study 2a); 0.019 (Study 2b)
Significance (p-value): 0.013 (Study 1); 0.041 (Study 2a); 0.020 (Study 2b)

H2a: Perceived message credibility mediates the interactive effect of communication agent type (AI chatbot vs. human) and message framing type (concrete vs. abstract) on consumers’ trust in the charitable organization.
Index of moderated mediation: 0.578, 95% CI [0.140, 1.026] (Study 2a); 0.489, 95% CI [0.080, 0.934] (Study 2b)

H2b: Perceived message credibility mediates the interactive effect of communication agent type (AI chatbot vs. human) and message framing type (concrete vs. abstract) on consumers’ donation intention.
Index of moderated mediation: 0.611, 95% CI [0.151, 1.065] (Study 2a); 0.496, 95% CI [0.088, 0.977] (Study 2b)

H3a: The level of anthropomorphism moderates the interactive effect of communication agent type (AI chatbot vs. human) and message framing type (concrete vs. abstract) on consumers’ trust in charitable organizations via perceived information credibility.
Index of moderated mediation: 0.726, 95% CI [0.434, 1.053] (Study 3)

H3b: Anthropomorphism moderates the interactive effect of communication agent type (AI chatbot vs. human) and message framing type (concrete vs. abstract) on consumers’ donation intention via perceived information credibility.
Index of moderated mediation: 0.658, 95% CI [0.395, 0.972] (Study 3)
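The indices of moderated mediation in Table 1 are conventionally estimated with percentile-bootstrap confidence intervals (e.g., Hayes’s PROCESS macro); an index whose 95% CI excludes zero indicates that the indirect effect differs across moderator levels. The Python sketch below illustrates the general procedure for a model in which framing (w) moderates the path from agent type (x) to message credibility (m), which in turn predicts trust (y); the column names, simulated data, and model specification are illustrative assumptions, not the authors’ exact analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def index_of_moderated_mediation(df, n_boot=2000, seed=1):
    """Percentile bootstrap of the index of moderated mediation (a3 * b)."""
    rng = np.random.default_rng(seed)
    indices = []
    for _ in range(n_boot):
        boot = df.sample(n=len(df), replace=True, random_state=rng)
        a_path = smf.ols("m ~ x * w", data=boot).fit()      # a3: coefficient on x:w
        b_path = smf.ols("y ~ m + x * w", data=boot).fit()  # b: coefficient on m
        indices.append(a_path.params["x:w"] * b_path.params["m"])
    lo, hi = np.percentile(indices, [2.5, 97.5])
    return float(np.mean(indices)), (float(lo), float(hi))

# Hypothetical data: x = agent type (0 = human, 1 = AI chatbot),
# w = framing (0 = abstract, 1 = concrete), m = credibility, y = trust.
rng = np.random.default_rng(0)
n = 300
x = rng.integers(0, 2, n)
w = rng.integers(0, 2, n)
m = 4 + 0.5 * x + 0.3 * w + 0.8 * x * w + rng.normal(0, 1, n)
y = 2 + 0.6 * m + rng.normal(0, 1, n)
df = pd.DataFrame({"x": x, "w": w, "m": m, "y": y})

estimate, ci = index_of_moderated_mediation(df)
print(f"index = {estimate:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```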
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
