Article

Anthropomorphic AI and Consumer Skepticism: A Behavioral Study of Trust and Adoption in Fragile Economies

by Agnes Caroline Dontina Mackay 1,*, Li Zuo 1 and Ibrahim Alusine Kebe 1,2
1 School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China
2 Institute of Public Administration and Management, University of Sierra Leone, Freetown 999127, Sierra Leone
* Author to whom correspondence should be addressed.
Behav. Sci. 2026, 16(4), 496; https://doi.org/10.3390/bs16040496
Submission received: 21 January 2026 / Revised: 20 February 2026 / Accepted: 2 March 2026 / Published: 27 March 2026

Abstract

This study examines the psychological mechanisms through which anthropomorphic artificial intelligence (AI) relates to consumer adoption intentions in fragile, low-trust economies. Integrating the Stimulus–Organism–Response framework with the Computers Are Social Actors paradigm, Institutional Trust Theory, and Privacy Calculus Theory, we investigate how human-like AI design shapes cognitive and affective responses within Sierra Leone’s banking sector. Using survey data from 277 banking customers and partial least squares structural equation modeling, we find that AI anthropomorphism exhibits no direct association with adoption intention (β = −0.013, p = 0.760). Instead, its influence is entirely indirect—transmitted in parallel through perceived social presence (β = 0.144, 95% CI [0.062, 0.226]) and trust in the AI system (β = 0.139, 95% CI [0.068, 0.210]). Critically, customer skepticism—shaped by institutional fragility—functions as a boundary condition that substantially attenuates both pathways: among highly skeptical users (+1 SD), anthropomorphism’s conditional effect on social presence becomes non-significant (β = 0.098, p = 0.124) compared to low-skepticism users (β = 0.412, p < 0.001), while its effect on trust is reduced by more than half (β = 0.118 vs. 0.284). These findings identify a critical boundary condition on human-like AI design: in low-trust environments, anthropomorphism operates not as a standalone adoption driver but as a relational amplifier whose efficacy depends on foundational trust and is substantially weakened when skepticism is high. The study challenges universalist assumptions in human–AI interaction research and underscores the need for institutionally sensitive design approaches in fragile economies.

1. Introduction

Artificial intelligence (AI) is rapidly transforming financial services worldwide. From virtual assistants that guide loan applications to chatbots that resolve customer queries in real time, AI-driven interfaces are redefining how banks engage with clients (Bhatnagr & Rajesh, 2025). A growing body of research suggests that designing AI systems with human-like qualities such as voice intonation, empathetic language, or even a name can enhance user experience by triggering social responses (Peter et al., 2025; Truong & Chen, 2025). This design strategy, known as AI anthropomorphism, has been shown to increase perceived warmth, reduce psychological distance, and foster trust in high-income contexts (Blut et al., 2021). These findings largely emerge from technologically mature, high-trust societies—raising a critical question: Does anthropomorphism yield similar benefits in fragile, resource-constrained economies where institutional trust is low and digital skepticism is high?
Theoretical advances in human–AI interaction often rely on the Computers Are Social Actors (CASA) paradigm (Nass & Moon, 2000), which posits that users automatically apply social heuristics to machines that exhibit human cues. More recently, scholars have embedded CASA within the broader Stimulus–Organism–Response (S-O-R) framework (Mehrabian & Russell, 1974) to explain how external design features (stimuli) evoke internal cognitive-affective states (organism), which in turn shape behavioral intentions (response) (J. Li et al., 2025; Zhou & Ma, 2025). For instance, recent studies demonstrated that anthropomorphic cues in AI assistants increase perceived social presence—a core organismic state—which subsequently boosts continuance intention (Chae & Kim, 2026).
Similarly, Y. Li et al. (2024) found that robots with human-like appearances increase consumer trust through perceived warmth and competence, with effects varying across cultures and service contexts.
Despite these insights, a significant contextual and theoretical gap persists. Nearly all empirical studies on AI anthropomorphism have been conducted in WEIRD (Western, Educated, Industrialized, Rich, Democratic) societies (Greilich et al., 2025; T. Li et al., 2025) where digital infrastructure is robust, regulatory frameworks for AI are emerging, and citizens generally exhibit higher baseline trust in institutions. In stark contrast, Sub-Saharan Africa—home to some of the world’s most dynamic yet fragile digital finance ecosystems—remains conspicuously absent from this literature (Ofori-Okyere & Edghiem, 2026). While pioneering studies have examined Internet banking acceptance in South Africa (Maduku, 2016), research on anthropomorphic AI in banking across the region remains limited. This omission is problematic for two reasons. First, it risks theoretical overgeneralization: mechanisms that work in Zurich may fail in Freetown due to differences in digital literacy, cultural norms around authority, or historical experiences with institutional betrayal (Donner, 2015). Second, it overlooks a critical policy and design frontier: as financial institutions in Africa seek to leverage AI to expand financial inclusion—a promising pathway outlined by Akanfe et al. (2025)—they urgently need evidence-based guidance on how to design these systems to build trust and resonate, rather than alienate, users in low-trust, fragile environments like Sierra Leone.
Nowhere is this more urgent than in Sierra Leone, a nation navigating the complex legacy of civil conflict and public health crises, which represents a critical case study of digital transformation within a low-trust, post-crisis environment. The country’s banking sector is undergoing rapid digitization, with financial institutions piloting AI chatbots to reach unbanked populations (Kamara & Oppong, 2025; Kebe et al., 2024); however, customer adoption remains sluggish. Preliminary fieldwork suggests that while some users find human-like AI “friendly,” others express deep skepticism—questioning whether a machine can understand their needs or protect their data in a system with weak oversight. This tension points to a pivotal but unexamined boundary condition: individual skepticism toward AI, shaped by socio-institutional fragility, may fundamentally alter how anthropomorphic stimuli are processed internally.
To address this gap, we ask:
  • How does AI anthropomorphism influence customers’ intention to adopt AI-driven banking services?
  • To what extent are the effects of AI anthropomorphism influenced by perceived social presence and trust in the AI system?
  • How does customer skepticism moderate the psychological pathways linking anthropomorphism to adoption intention?
Guided by the S-O-R framework, we conceptualize AI anthropomorphism as the stimulus (S), with perceived social presence (an affective-relational state) and trust in the AI system (a cognitive-evaluative state) operating as parallel organismic dimensions within the ‘O’ component. These concurrent psychological states jointly translate anthropomorphic stimuli into adoption intention (R), with skepticism acting as a dispositional moderator of both pathways. This parallel architecture reflects users’ adaptive dual-channel processing in fragile economies—simultaneously appraising relational warmth and institutional safety when evaluating AI systems.
This study makes three key contributions. First, it challenges the presumed universality of established human-AI interaction theories by demonstrating that anthropomorphism exerts no direct effect on adoption in fragile economies; its influence is entirely indirect, operating through perceived social presence and trust. This finding reveals that human-like design is not a standalone driver of adoption but a relational amplifier whose efficacy is contingent on foundational trust conditions—a critical boundary condition on CASA. Second, it introduces and empirically validates individual skepticism as a critical boundary condition, revealing a “trust duality” where human-like design can backfire in low-trust, high-skepticism environments. Third, it provides actionable, context-sensitive guidance for stakeholders, shifting the focus from design-centric to trust-centric AI deployment strategies that prioritize institutional accountability and user empowerment in the Global South.

2. Literature Review and Hypotheses

2.1. Theoretical Framework

This study is grounded in the Stimulus–Organism–Response (S-O-R) framework (Mehrabian & Russell, 1974), which posits that external cues evoke internal psychological states that shape behavioral outcomes. Critically, the organism (‘O’) comprises multiple concurrent states—not sequential stages. Mehrabian and Russell (1974) originally conceptualized pleasure, arousal, and dominance as simultaneous dimensions activated by stimuli.
Extending this logic, we theorize that AI anthropomorphism activates two parallel organismic dimensions in fragile economies: (1) an affective-relational channel (perceived social presence) reflecting interpersonal warmth and (2) a cognitive-institutional channel (trust) reflecting risk appraisal (Gefen & Straub, 2004; Mcknight et al., 2011). We propose this parallel architecture as contextually rational in Sierra Leone: when institutional safeguards are weak, users cannot afford sequential processing that prioritizes relational connection before safety evaluation. Instead, we suggest they engage in dual-channel vigilance—simultaneously appraising whether the AI ‘feels human’ and whether it ‘can be trusted.’
We integrate three theories to explain this architecture. CASA (Nass & Moon, 2000) explains why anthropomorphism functions as a potent stimulus that triggers social attributions. Institutional Trust Theory (Mcknight et al., 2011) clarifies how trust emerges concurrently with social presence as users rely on relational signals as proxies for benevolence in low-regulation environments. Privacy Calculus Theory (Dinev & Hart, 2006) positions skepticism as a boundary condition that simultaneously attenuates both pathways when users perceive anthropomorphic cues as manipulative rather than benevolent. Together, these theories position anthropomorphism as a context-dependent stimulus whose efficacy hinges on institutional fragility and user skepticism within a parallel-processing S-O-R model.
Collectively, these frameworks operate at distinct analytical levels. The S-O-R framework provides the overarching structural architecture, mapping the causal sequence from design features to behavioral outcomes without prescribing the specific psychological content. CASA specifies the micro-psychological mechanism, explaining why anthropomorphic stimuli automatically trigger social heuristics rather than technical evaluations. Institutional Trust Theory contextualizes the organismic state, clarifying why trust must operate as a parallel pathway to social presence in fragile economies where relational signals substitute for weak regulatory safeguards. Finally, Privacy Calculus Theory defines the boundary condition, explaining when the process fails—specifically, when skeptical users recalibrate the risk–benefit analysis to view human-like cues as manipulative rather than benevolent. Together, this integrated lens moves beyond a generic application of S-O-R to offer a contextually situated explanation of AI adoption in low-trust environments.

2.2. Hypotheses Development and Conceptual Framework

2.2.1. AI Anthropomorphism as a Stimulus in Digital Service Environments

In human–AI interaction research, anthropomorphism, regarded as the attribution of human-like qualities such as voice, emotion, name, or intentionality to non-human agents, has emerged as a powerful design strategy for enhancing user engagement (Keating et al., 2025). Grounded in CASA, which predicts that human-like cues automatically trigger social responses, we hypothesize that AI anthropomorphism will positively influence both perceived social presence (the affective-relational outcome of social cue processing) and trust (the cognitive-evaluative outcome, as social heuristics also signal benevolence). Within the S-O-R framework, AI anthropomorphism functions as an external stimulus that elicits internal cognitive and affective states—the organism—which subsequently shape behavioral outcomes—the response (Pan et al., 2024). Empirical evidence consistently demonstrates that anthropomorphic cues activate social heuristics, leading users to perceive AI systems as warmer, more empathetic, and more responsive (Greilich et al., 2025; J. Li et al., 2025). In banking contexts where trust, personalization, and emotional safety are paramount, such design features have been shown to enhance perceived social presence and mitigate anxiety associated with digital financial transactions (Chang et al., 2025). A primary mechanism through which anthropomorphism operates is perceived social presence—defined as the extent to which users experience interaction with a sentient, responsive entity rather than a passive technological tool (Tsekouras et al., 2024). When AI systems display human-like attributes, users are more likely to interpret the interaction as socially meaningful, thereby strengthening feelings of co-presence and interpersonal connection, and this effect is especially pronounced in service domains requiring emotional resonance (Y. Li et al., 2024). Accordingly, we hypothesize:
H1. 
AI anthropomorphism positively influences perceived social presence.
Beyond its influence on social presence, anthropomorphism may also have a trust-building effect. Trust in technology is often formed rapidly through surface-level cues, especially in low-involvement or high-uncertainty contexts. Anthropomorphic features such as a friendly tone, empathetic phrasing, or a personalized name serve as trust heuristics, signaling benevolence and competence without requiring users to engage in complex evaluations of system logic, data security, or algorithmic transparency (Calhoun et al., 2019). In banking, where perceived integrity is critical, these cues can directly elevate trust, even prior to the full development of social presence. A systematic literature review of 84 studies found that anthropomorphic chatbots generally produce positive outcomes, with humanlike communication styles and emotional characteristics enhancing trust, empathy, and social presence, though overly humanlike features can trigger privacy concerns and AI anxiety (Greilich et al., 2025). Similarly, Z. Wang et al. (2026) found that human-likeness in AI financial advisors positively affects adoption intention through consumer technology vulnerability, though this effect is diminished by higher self-efficacy and consumer innovativeness.
In Sierra Leone—a context marked by limited digital literacy and underdeveloped mental models of AI—users may rely heavily on such heuristic cues to assess reliability. Thus, anthropomorphism may function as a cognitive proxy for trustworthiness, amplifying its direct impact. We therefore propose:
H2. 
AI anthropomorphism positively influences trust in the AI system.
Furthermore, anthropomorphism can also shape behavioral intention through affective and normative mechanisms, independent of trust or social presence. Users may adopt AI-driven banking services simply because they find the interface likable, engaging, or culturally resonant—responses triggered automatically by human-like design (Bhatnagr et al., 2024). Research on human–machine interaction reveals that interface design and repeated exposure significantly influence user preferences and comfort. Tsekouras et al. (2024) demonstrated that more anthropomorphic conversational interfaces lead to increased interaction enjoyment and social presence, resulting in more positive user responses. In resource-constrained settings like Sierra Leone, where interpersonal warmth and relational familiarity are deeply embedded in service expectations, an AI that “speaks like a neighbor” may be embraced not because it is technically understood or fully trusted, but because it feels familiar, respectful, and socially appropriate. This aligns with von Schenk et al. (2025), who found that people form beliefs about machine interactions in self-serving ways, indicating that preferences can develop without conscious evaluation. Given the cultural and institutional context of Sierra Leone’s banking sector, this study proposes a similar direct association between anthropomorphic design and adoption behavior.
H3. 
AI anthropomorphism positively influences intention to adopt AI-driven banking services.

2.2.2. The Role of Perceived Social Presence

Perceived social presence—the subjective sense that one is interacting with a sentient, responsive entity rather than an impersonal machine—serves as a critical psychological mechanism in human–AI interaction (Gefen & Straub, 2004). Drawing on S-O-R, perceived social presence functions as a core organismic state that translates external design cues into behavioral responses. From a CASA perspective, this state is activated precisely because users apply social heuristics to anthropomorphic stimuli. In service contexts where relational quality shapes user experience, social presence bridges the gap between technological functionality and human-centered engagement (Y. Li et al., 2024). Kandampully et al. (2023) emphasize that managers must understand consumers’ emphasis on social presence alongside technological personalization, aesthetics, functionality, and interactivity while participating in value cocreation. A strong sense of social presence enhances user comfort, reduces perceived transactional risk, and fosters emotional connection—all of which directly promote adoption intentions. When users feel accompanied by an AI agent, they are more likely to perceive the interaction as supportive, intuitive, and socially valid, even outperforming passive alternatives such as watching videos (De Freitas et al., 2025). This is especially consequential in banking, a domain characterized by high involvement, emotional sensitivity, and institutional vulnerability. Empirical studies confirm that higher social presence increases willingness to disclose financial information, follow AI advice, and continue using digital services (Bhatnagr et al., 2024). Conversely, Riedel et al. (2022) demonstrated that consumers experience lower positive emotions, specifically affection, when receiving financial advice from AI compared to humans, with affection and trust serving as serial mediators affecting word-of-mouth and brand attitudes.
Considering Sierra Leone’s context, where face-to-face banking remains culturally normative and digital interfaces may feel alienating, social presence can mitigate resistance by simulating interpersonal rapport. Users who perceive the AI as present and attentive are more likely to view it as a legitimate service partner—thereby increasing their intention to adopt. Thus, we propose:
H4. 
Perceived social presence positively influences intention to adopt AI-driven banking services.

2.2.3. The Role of Trust in AI

Trust—the willingness to be vulnerable to another party based on positive expectations of their behavior (Mayer et al., 1995)—is a cornerstone of technology adoption, especially in high-stakes domains such as financial services. Informed by Privacy Calculus Theory, trust operates as a risk-mitigating organismic state: in banking contexts, users must believe the system will protect their data before adopting it. Within the S-O-R architecture, trust thus represents the cognitive-evaluative dimension of the organism, translating cognitive and affective appraisals into behavioral intentions. In the context of AI-driven banking, trust in the AI system reflects users’ beliefs that the technology is competent, benevolent, and operates with integrity (Königstorfer & Thalmann, 2020).
In digital finance, where decisions involve personal data, monetary risk, and long-term consequences, trust serves as a primary antecedent of user engagement. When customers believe an AI system will protect their information, provide accurate advice, and act in their best interest, they are significantly more likely to adopt and rely on it (Z. Wang et al., 2026). This relationship is robust across contexts: evidence reveals that different trust referents show comparable paths to intention to use, with trust in the technology mattering more than trust in the provider when both are considered (Kuen et al., 2023). The historical institutional fragility and limited digital regulation in Sierra Leone elevate the importance of trust in AI adoption. Here, user hesitation frequently originates in practical concerns such as data security and fairness, rather than aversion to technology. Establishing trust reduces these perceived risks and facilitates adoption. Accordingly, we hypothesize:
H5. 
Trust in the AI system positively influences intention to adopt AI-driven banking services.

2.2.4. The Mediating Role of Perceived Social Presence and Trust in AI System Outcomes

Beyond its ability to influence intention to adopt AI-driven banking services, perceived social presence also serves as a key factor in the pathway from AI anthropomorphism to behavioral intention. Anthropomorphic design through voice, empathy, or conversational style does not influence behavior in isolation; rather, it activates the perception of social presence, which is associated with adoption (Cheng et al., 2022). Similarly, research on healthcare conversational agents demonstrates that anthropomorphism is one of multiple factors influencing acceptability, acceptance, and adoption among both patients and professionals. Meta-analytic evidence supports this mediation: Blut et al. (2021) found that social presence consistently mediates the effect of anthropomorphism on user satisfaction and behavioral outcomes across service robots and virtual agents. Similarly, Bhatnagr et al. (2024) demonstrated that in digital banking, anthropomorphism and perceived intelligence predict interaction quality, which significantly impacts expectation confirmation and continuous intention to use AI-enabled services. Given that users in fragile economies like Sierra Leone may rely more on relational heuristics than technical assessments when evaluating AI, perceived social engagement is likely to be a decisive factor in shaping behavioral intention. Consequently, we hypothesize that perceived social presence operates as an independent mediating pathway through which AI anthropomorphism is associated with adoption intention—reflecting the affective-relational channel activated by human-like design cues.
H6. 
Perceived social presence mediates the relationship between AI anthropomorphism and intention to adopt AI-driven banking services.
Further, trust in the AI system operates as a distinct, independent mediating pathway through which AI anthropomorphism influences adoption intention—reflecting the cognitive-institutional channel through which users evaluate system benevolence and integrity. Critically, this pathway operates in parallel with (not subsequent to) the social presence pathway, as both represent distinct organismic dimensions within the S-O-R framework. While anthropomorphic cues (e.g., empathetic language, human-like voice) may initially enhance appeal, their ultimate value lies in fostering perceived trustworthiness. Human-like attributes signal benevolence and social alignment, leading users to infer that the AI “cares” about their welfare—a key dimension of trust in automated systems (Yu et al., 2025). Bhatnagr et al. (2024) demonstrated that in conversational banking agents, anthropomorphism increased adoption intention primarily through enhanced trust, even after controlling for usability and social presence. Similarly, Z. Wang et al. (2026) found that human-likeness in AI financial advisors positively affects adoption intention through reduced consumer technology vulnerability, though this effect is moderated by individual traits like self-efficacy and consumer innovativeness. Curtis et al. (2021) also found that empathy, relational behaviors, and realistic human-like avatars with medical attire enhance user experience with virtual health assistants, suggesting that carefully designed anthropomorphic features can improve engagement with AI-based advisory services. These findings align with institutional trust theory, which posits that in environments with weak formal safeguards, users rely on relational signals like human-likeness to assess reliability (Mcknight et al., 2011).
Considering Sierra Leone’s relational culture, where interpersonal trust often compensates for weak institutional confidence, an AI that feels human may be perceived as more accountable and less likely to exploit users. Consequently, we hypothesize:
H7. 
Trust in the AI system mediates the relationship between AI anthropomorphism and intention to adopt AI-driven banking services.

2.2.5. The Moderating Role of Skepticism

While AI anthropomorphism can enhance user perceptions through social and trust-based mechanisms, its effectiveness is not universal. Institutional Trust Theory suggests that in weak regulatory environments, trust is not automatically granted but filtered through skepticism born of institutional experience. Privacy Calculus Theory further predicts that skeptical users will perceive greater risks than benefits from anthropomorphic cues. Thus, skepticism moderates both pathways, attenuating the translation of anthropomorphism into social presence and trust. Individual differences like skepticism toward AI can significantly weaken these effects. Skepticism, defined as a dispositional tendency to doubt the motives, competence, or safety of artificial intelligence systems (da Costa Filho & da Costa Hernandez, 2025), functions as a critical boundary condition in the S-O-R chain. It shapes how users interpret and respond to anthropomorphic stimuli, especially among people prone to underestimating AI potential due to exponential growth bias and motivated reasoning (Meikle & Bonner, 2024). Consistent with S-O-R, both perceived social presence and trust function as mediating organismic states. CASA predicts that anthropomorphism activates these states, while Privacy Calculus Theory explains why trust—specifically—carries the cognitive risk appraisal necessary for adoption in banking contexts. Users high in skepticism are more likely to engage in counter-arguing or threat appraisal when exposed to human-like AI, interpreting warmth as manipulation and empathy as deception (Peter et al., 2025). This cognitive resistance weakens the psychological pathways through which anthropomorphism typically operates.
Perceived social presence arises when users accept anthropomorphic cues at face value, allowing themselves to experience the interaction as socially meaningful. Skeptical individuals may discount or reinterpret these cues as artificial or strategic, thereby blocking the sense of co-presence. For example, a friendly greeting from an AI chatbot might be perceived by a low-skepticism user as warm, but by a high-skepticism user as scripted or deceptive. This aligns with privacy calculus theory, which posits that users weigh perceived benefits against risks and skepticism heightens risk perception (Dinev & Hart, 2006). Empirical support comes from the mHealth domain: von Kalckreuth and Feufel (2023) found that intention to use mHealth apps is driven by perceived benefits, trust, and social norms, with attitude to privacy having a large inhibiting effect on perceived benefits. This skepticism is not unique to Sierra Leone. In Nigeria, Okeke (2025) documents similar distrust toward AI chatbots, citing data security concerns and linguistic mismatches as key barriers. Despite sophisticated deployments like First Bank’s Ada, adoption remains constrained by ‘trust deficits rooted in institutional experience’—directly paralleling our conceptualization of skepticism as institutionally shaped rather than merely technological. We therefore anticipate that higher skepticism will weaken the positive relationship between anthropomorphism and social presence.
H8. 
Skepticism negatively moderates the positive relationship between AI anthropomorphism and perceived social presence, such that the relationship is weaker when skepticism is high.
Similarly, skepticism disrupts the formation of trust in response to anthropomorphic design. Trust requires vulnerability, yet skeptical users are predisposed to attribute negative intent to AI systems, regardless of surface-level friendliness (Mcknight et al., 2011). A human-like voice or empathetic phrasing may even backfire, triggering suspicion that the AI is trying too hard to gain compliance. Brandizzi (2023) emphasizes that developing human-like AI communication requires understanding human-AI trust dynamics, as misalignment between AI language use and human reasoning can lead to unexpected behaviors. Also, in financial advising, anthropomorphism increased trust only among users with low AI skepticism; for highly skeptical users, the same cues had no effect or even reduced trust (Z. Wang et al., 2026). This reflects a motivated reasoning process: when users doubt the legitimacy of a technology, they scrutinize its signals more critically, rendering heuristic cues ineffective. Within Sierra Leone’s banking sector, historical institutional fragility fuels public distrust, which in turn heightens skepticism toward AI systems. Consequently, anthropomorphic design cannot fully overcome underlying doubts regarding data ethics, algorithmic bias, or organizational intent. Instead, its ability to foster trust is contingent upon pre-existing skepticism. Hence, we hypothesize:
H9. 
Skepticism negatively moderates the positive relationship between AI anthropomorphism and trust in the AI system, such that the relationship is weaker when skepticism is high.

2.3. Synthesis of Hypotheses

The model framework (see Figure 1) integrates the CASA paradigm, Institutional Trust Theory, and Privacy Calculus Theory within the S-O-R framework to explain AI adoption in Sierra Leone’s banking sector. CASA establishes dual pathways: AI anthropomorphism (stimulus) simultaneously activates two parallel organismic dimensions—perceived social presence (affective-relational) and trust (cognitive-institutional)—which jointly drive adoption intention (response). This parallel architecture is not a methodological artifact but a contextually rational response to institutional fragility: in low-trust environments, users cannot afford sequential processing that prioritizes warmth before safety. Instead, they engage in dual-channel appraisal—simultaneously evaluating relational connection and institutional risk.
However, in contexts of institutional fragility and weak data governance, skepticism, shaped by Privacy Calculus and Institutional Trust Theory, acts as a contextual filter. It determines whether human-like cues are seen as benevolent or manipulative. The study proposes a trust duality: anthropomorphism strengthens adoption when skepticism is low but may backfire when skepticism is high. This integrated view captures not only how AI design influences behavior, but when it succeeds, thereby highlighting that effective AI deployment in fragile economies requires more than good design; it demands alignment with local socio-institutional realities.
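The moderated parallel mediation structure synthesized above can be expressed as a small set of linear equations. The sketch below is illustrative only: the standardized path coefficients are hypothetical placeholders, not the study's estimates, and it simply shows how the conditional indirect effect through each parallel mediator shrinks as the skepticism moderator rises.

```python
# Moderated parallel mediation in the S-O-R model (illustrative sketch).
# Structural equations (standardized variables):
#   SP  = a1*ANT + a2*SKEP + a3*(ANT*SKEP)   (affective-relational channel)
#   TR  = b1*ANT + b2*SKEP + b3*(ANT*SKEP)   (cognitive-institutional channel)
#   INT = c*ANT + d1*SP + d2*TR              (adoption intention)
# Conditional indirect effect through a mediator at moderator value w:
#   (first-stage slope at w) * (second-stage slope) = (a1 + a3*w) * d

def conditional_indirect(a1, a3, d, w):
    """Indirect effect of ANT on INT through one mediator at moderator value w."""
    return (a1 + a3 * w) * d

# Hypothetical standardized coefficients (placeholders, not the paper's results)
a1, a3, d1 = 0.26, -0.16, 0.35   # social-presence channel
b1, b3, d2 = 0.20, -0.08, 0.49   # trust channel

for w, label in [(-1.0, "low skepticism (-1 SD)"), (1.0, "high skepticism (+1 SD)")]:
    via_sp = conditional_indirect(a1, a3, d1, w)
    via_tr = conditional_indirect(b1, b3, d2, w)
    print(f"{label}: via social presence = {via_sp:.3f}, via trust = {via_tr:.3f}")
```

With negative interaction terms (a3, b3 < 0), both conditional indirect effects weaken at high skepticism, which is the "trust duality" pattern H8 and H9 anticipate.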

3. Methodology

3.1. Research Strategy and Design

This study adopts a quantitative, cross-sectional survey design to test a moderated mediation model grounded in the S-O-R framework. The design is appropriate for examining complex psychological mechanisms such as the roles of perceived social presence, trust, and skepticism in shaping behavioral intentions toward AI-driven banking services (Podsakoff et al., 2003). Given the novelty of AI interfaces in Sierra Leone’s financial sector, a survey approach enables systematic measurement of latent constructs that cannot be directly observed but are reliably captured through validated self-report scales (Hair et al., 2021).

3.2. Population and Sampling Procedure

The target population consists of adult banking customers (aged ≥ 18) in Sierra Leone who have either used, been exposed to, or at least heard about AI-driven banking services—such as chatbots, virtual assistants, or AI-powered SMS alerts—within the past 12 months. Given the nascent stage of AI adoption in the country’s financial sector, we included individuals with even minimal awareness (e.g., having seen a demonstration or received promotional information) to ensure meaningful cognitive engagement with the survey constructs.
Participants were selected using a stratified random sampling approach across four major urban areas representing all four administrative regions of Sierra Leone: Freetown (Western Area), Makeni (Northern Province), Bo (Southern Province), and Kenema (Eastern Province). These locations collectively account for the majority of formal banking activity in the country (Ministry of Finance—Government of Sierra Leone, 2023). Within each city, we randomly selected branches from a mix of domestic and foreign-based commercial banks. At each branch, research assistants approached customers during non-peak hours and screened them for eligibility using a brief verbal questionnaire. Eligible and consenting individuals were then invited to participate in the survey.
Data were collected between August and December 2025 via KoboToolbox, a secure digital data collection platform. This method accommodated varying literacy levels by allowing trained enumerators to read questions aloud in English or Krio while recording responses.
To determine the minimum sample requirements, an a priori power analysis was conducted using G*Power 3.1 for multiple regression (α = 0.05, power = 0.95, medium effect size f2 = 0.15). This analysis indicated a minimum of 276 respondents to detect the direct effects among six predictors. To account for potential attrition, we targeted 350 eligible participants, ultimately securing 277 complete responses (89.4% completion rate).
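The a priori power analysis described above can be approximated with a short sketch. The code below uses the stated inputs (α = 0.05, power = 0.95, f² = 0.15, six predictors) for the overall regression F test, with the G*Power convention λ = f²·N; the exact minimum N returned depends on the test family and options selected in G*Power, so this is illustrative rather than a reproduction of the reported analysis.

```python
# Illustrative a priori power computation for a fixed-model multiple
# regression F test (R^2 deviation from zero). Noncentrality is taken
# as lambda = f^2 * N, following the G*Power convention.
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, f2, alpha=0.05):
    """Power of the overall F test at sample size n."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)       # critical F under H0
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)   # P(F > crit | H1)

def min_sample(n_predictors, f2, alpha=0.05, target=0.95):
    """Smallest N whose power reaches the target."""
    n = n_predictors + 2
    while regression_power(n, n_predictors, f2, alpha) < target:
        n += 1
    return n

n_req = min_sample(n_predictors=6, f2=0.15)
print(n_req, regression_power(n_req, 6, 0.15))
```

Because power is monotone in N here (both the noncentrality and the denominator degrees of freedom grow), a simple incremental search suffices.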
While this power analysis establishes a baseline for detecting main effects, moderated mediation models introduce interaction terms and indirect effects, which typically exhibit smaller effect sizes and require scrutiny of model complexity. To ensure the sample was adequate for the proposed analytical strategy, we evaluated it against Structural Equation Modeling (SEM) criteria. The final sample of 277 satisfies three critical requirements: (1) it exceeds the N ≥ 200 threshold recommended by Hair et al. (2021) for PLS-SEM to obtain stable path estimates with medium effects; (2) it maintains a robust 9.2:1 ratio of respondents to estimated parameters (30 parameters), which exceeds the 5:1 minimum recommended for SEM and mitigates the risk of overfitting; and (3) following the recommendations of Aguinis et al. (2013) for detecting interaction effects, the sample provides sufficient statistical power to detect moderation effects of medium-to-large magnitude. Therefore, the sample is deemed adequate for the complexity of the proposed moderated mediation model.

3.3. Measurement Scales

All constructs were measured using 5-point Likert scales (1 = strongly disagree, 5 = strongly agree) and adapted from validated instruments (see Appendix A). AI Anthropomorphism was defined as the perception that an AI-driven banking interface exhibits human-like qualities such as voice, empathy, or intentionality; a 5-item scale was adapted from Bartneck et al. (2009). Perceived Social Presence was measured with a 4-item scale from Gefen and Straub (2004), capturing the sense of interacting with a sentient entity. Trust in the AI System employed a 5-item scale adapted from Chowdhury et al. (2022), reflecting competence, benevolence, and integrity. Intention to Adopt AI-Driven Banking Services used a 5-item scale from Liu et al. (2022). Skepticism toward AI was measured with a 3-item scale adapted from Zhang et al. (2016) and contextualized for financial services in low-trust environments.
All scales underwent a three-stage cultural adaptation process: (1) expert panel review for linguistic, relational, and technological grounding; (2) cognitive pre-testing (N = 24) with think-aloud protocols to identify misinterpretations; and (3) pilot testing (n = 30) with EFA validation. For example, the anthropomorphism item ‘has a personality’ was revised to ‘communicates in a friendly, human-like way’ after cognitive interviews revealed spiritual misinterpretations, and trust items were reframed to acknowledge Sierra Leone’s weak regulatory environment (e.g., ‘even though data laws are weak here, this AI tries to protect my information’). All items were administered in Krio with enumerator clarification protocols to ensure construct equivalence (Brislin, 1970).

3.4. Data Collection and Analysis

Anonymous surveys were administered to customers at commercial banks across four urban centers in Sierra Leone: Freetown, Makeni, Bo, and Kenema. Eligibility required participants to be adults (≥18 years) with prior exposure to AI-driven banking services (e.g., chatbots, virtual assistants, or AI-powered alerts) in the preceding 12 months. To mitigate common method bias and social desirability, item order was randomized, and trained research assistants conducted the surveys. A pilot test (n = 30) confirmed the clarity and cultural appropriateness of the instrument in both English and Krio, with an average completion time of 12 min. Missing data were minimal (<1.5% per item) and handled using series mean imputation after confirming data were missing completely at random (MCAR) via Little’s test: χ2 = 31.24, p = 0.132.
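The series-mean imputation step described above can be illustrated in a few lines: each item's missing entries are replaced with that item's observed mean. The column names below are hypothetical, and Little's MCAR test itself is not reproduced here.

```python
# Minimal sketch of series-mean imputation for survey items measured
# on a 5-point scale. Missing values in each column are filled with
# that column's mean over the observed responses.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(10, 3)).astype(float),
                     columns=["AIA1", "AIA2", "AIA3"])  # hypothetical item names
items.iloc[2, 0] = np.nan          # simulate sparse missingness (<1.5%)
items.iloc[7, 2] = np.nan

imputed = items.fillna(items.mean())   # column-wise (series) means
print(imputed.round(2))
```

`DataFrame.mean()` skips missing values by default, so each fill value is the mean of the observed responses for that item only.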
Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS 4.0. This approach was selected for three methodologically grounded reasons aligned with the study’s aims. First, the research extends rather than tests established theory—applying the S-O-R and CASA frameworks to Sierra Leone’s fragile banking context, where measurement models developed in WEIRD settings may not transfer cleanly. PLS-SEM’s component-based estimation accommodates such uncertainty, whereas covariance-based SEM assumes perfect a priori measurement specification. Second, the study’s predictive orientation—centered on whether and how AI anthropomorphism drives adoption intention—prioritizes out-of-sample prediction (maximizing R2 and Q2), which PLS-SEM optimizes. Third, the field data exhibited non-normality (e.g., skewness = 1.82, kurtosis = 3.47 for skepticism); PLS-SEM with bootstrapping is robust to non-normal distributions without requiring large samples (N > 500). Collectively, these considerations render PLS-SEM methodologically appropriate for technology adoption research in Global South contexts, where theory extension, predictive focus, and measurement flexibility are paramount.
The measurement model was evaluated for reliability (Cronbach’s α > 0.80; composite reliability ρₐ > 0.85), convergent validity (AVE > 0.50), and discriminant validity (HTMT < 0.85; Fornell–Larcker criterion satisfied). The structural model was estimated using 5000 bootstrap subsamples to generate bias-corrected 95% confidence intervals for direct, indirect (mediation), and conditional (moderated mediation) effects. Simple slopes were plotted to visualize interaction effects at ±1 SD of skepticism. Model quality was assessed using R2 (variance explained), Cohen’s f2 (effect size), Stone–Geisser’s Q2 (predictive relevance via blindfolding, omission distance = 7), and SRMR (<0.08, indicating good overall fit).
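The convergent-validity quantities named above have standard closed-form expressions given standardized indicator loadings: AVE is the mean squared loading, and composite reliability follows the usual congeneric formula. A minimal sketch, with hypothetical toy loadings:

```python
# AVE and composite reliability (CR) from standardized loadings.
# AVE = mean(lambda^2); CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)).
import numpy as np

def ave(loadings):
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    errors = 1 - lam ** 2                 # indicator error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

toy = [0.78, 0.81, 0.74, 0.80]            # hypothetical standardized loadings
print(round(ave(toy), 3), round(composite_reliability(toy), 3))  # ≈ 0.613 0.864
```

With these toy loadings both thresholds cited in the text (AVE > 0.50, CR > 0.70) are met.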
Eight control variables were included to account for demographic and contextual heterogeneity: age, gender, education, income, prior AI experience, bank type, smartphone proficiency, and region. Controls were selected based on (a) theoretical relevance to technology adoption in fragile economies (Venkatesh et al., 2003; Donner, 2015) and (b) significant bivariate correlations (p < 0.05) with at least one endogenous construct in preliminary analysis.

4. Results

4.1. Respondents’ Descriptive Analysis

The sample (N = 277) comprised 57.8% female respondents, predominantly aged 26–45 years (71.5%). Educational attainment was high (64.6% bachelor’s degree or higher). All owned smartphones—an eligibility criterion ensuring engagement with the AI stimulus. Most had prior AI banking exposure (68.2% users; 31.8% aware non-users). Foreign commercial banks were preferred (78.3%), consistent with institutional void theory (North, 1990). Geographically, 54.2% resided in Freetown, with 45.8% distributed across three provincial capitals (Makeni, Bo, Kenema), providing urban–provincial coverage representative of Sierra Leone’s digital banking landscape (see Table 1).

4.2. Assessment of Measurement Model

The measurement model was evaluated for reliability, convergent validity, and discriminant validity using guidelines for Partial Least Squares Structural Equation Modeling (PLS-SEM) (Hair et al., 2021). As shown in Table 2, all constructs demonstrate strong internal consistency and item coherence. Composite reliability (CR) values range from 0.774 (Skepticism) to 0.904 (Intention to Adopt), exceeding the recommended threshold of 0.70. Cronbach’s alpha (CA) values similarly exceed 0.70 for all constructs, confirming scale reliability (see Table 2). Convergent validity is supported by factor loadings and average variance extracted (AVE). All indicator loadings are above 0.70 (ranging from 0.721 to 0.906), and AVE values exceed the 0.50 benchmark, with the lowest at 0.608 (AI Anthropomorphism) and the highest at 0.713 (Intention to Adopt). This indicates that each construct explains more than 50% of the variance in its associated items (see Table 2). Discriminant validity was assessed using two criteria. First, the Fornell–Larcker criterion (Table 3) shows that the square root of each construct’s AVE (diagonal values) exceeds its correlations with all other constructs. Second, the Heterotrait–Monotrait ratio (HTMT) (Table 4) confirms discriminant validity, as all HTMT values are below the conservative threshold of 0.85. Collectively, these results confirm that the measurement model exhibits strong reliability, convergent validity, and discriminant validity, satisfying prerequisites for robust structural model estimation.
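For readers unfamiliar with the HTMT criterion applied above, the ratio can be computed directly from an item correlation matrix: the average between-construct (heterotrait) correlation is divided by the geometric mean of the average within-construct (monotrait) correlations. A minimal sketch; the toy matrix and construct assignments are hypothetical.

```python
# Heterotrait-Monotrait ratio (HTMT) from an item correlation matrix.
import numpy as np

def htmt(item_corr, idx_a, idx_b):
    """HTMT for two constructs whose items sit at idx_a and idx_b."""
    r = np.asarray(item_corr)
    hetero = np.abs(r[np.ix_(idx_a, idx_b)]).mean()          # between-construct
    def mono(idx):
        block = np.abs(r[np.ix_(idx, idx)])
        return block[np.triu_indices(len(idx), k=1)].mean()  # within, off-diagonal
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Toy matrix: two 2-item constructs, within-item r = .6, between-item r = .3
toy_r = np.array([[1.0, 0.6, 0.3, 0.3],
                  [0.6, 1.0, 0.3, 0.3],
                  [0.3, 0.3, 1.0, 0.6],
                  [0.3, 0.3, 0.6, 1.0]])
print(htmt(toy_r, [0, 1], [2, 3]))  # ≈ 0.5, well under the 0.85 cutoff
```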

4.2.1. Structural Model Predictive Power

The structural model demonstrates strong explanatory and predictive capability (Table 5). The endogenous constructs exhibit substantial variance explained (R2): INT achieves an R2 of 0.621 (adjusted R2 = 0.616), indicating that 62.1% of its variance is accounted for by AIA, PSP, and TAI. Similarly, PSP (R2 = 0.623) and TAI (R2 = 0.515) are robustly predicted by AIA, reflecting theoretically grounded linkages. Predictive relevance was confirmed via Stone–Geisser’s Q2 (blindfolding, omission distance = 7), with all constructs exceeding the threshold of zero (INT: 0.490; PSP: 0.581; TAI: 0.497), affirming meaningful out-of-sample predictive power (Hair et al., 2021). Prediction errors remain moderate (RMSE ≤ 0.729; MAE ≤ 0.586), consistent with behavioral modeling in resource-constrained contexts. Effect size analysis (f2) further reveals that AIA exerts a medium–large influence on PSP (f2 = 0.224) and a small–medium influence on TAI (f2 = 0.135), while trust demonstrates the strongest proximal effect on adoption intention (f2 = 0.214) relative to social presence (f2 = 0.173). Critically, the negligible effect of AIA on INT (f2 = 0.000) confirms full mediation—anthropomorphism is associated with adoption exclusively through parallel psychological pathways, with no direct behavioral effect. Collectively, these metrics validate a theoretically coherent and contextually grounded model of AI adoption in Sierra Leone’s banking environment.

4.2.2. Common Method Bias (CMB)

To address the potential threat of common method bias inherent in self-reported, single-source data, we implemented a combination of procedural remedies and statistical diagnostics in line with contemporary recommendations (Podsakoff et al., 2012). Procedurally, we (1) psychologically and procedurally separated predictor and criterion variables within the same survey session through item randomization, insertion of neutral filler items, and varied question framing to reduce recall consistency and acquiescence bias; (2) guaranteed respondent anonymity to reduce evaluation apprehension; (3) incorporated reverse-coded items to mitigate acquiescence bias; and (4) employed neutrally worded items to minimize social desirability. Statistically, we conducted three complementary assessments. First, Harman’s single-factor test revealed that a single unrotated factor accounted for 47% of the total variance, falling below the conventional 50% threshold. Second, we applied the marker variable technique using “smartphone satisfaction” as a theoretically unrelated marker; the average correlation change after partialling out the marker covariance was minimal (Δr = 0.052), below the 10% threshold indicative of substantial bias. Third, all construct correlations remained below 0.85, and all inner-model variance inflation factors (VIFs) were comfortably under the conservative cutoff of 3.3 (see Table 2), suggesting that structural estimates are not distorted by collinearity arising from CMB. While common method bias cannot be entirely ruled out in cross-sectional designs, this triangulated approach suggests it is unlikely to substantively influence the reported relationships. We acknowledge this limitation and temper any causal inferences accordingly.
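Harman's single-factor diagnostic reported above can be sketched as the share of total variance captured by the first unrotated factor. The illustration below approximates this with the leading eigenvalue of the item correlation matrix (a PCA-style shortcut); published implementations typically use an unrotated exploratory factor analysis, so results differ slightly, and the data here are simulated.

```python
# Harman's single-factor sketch: variance share of the first unrotated
# factor, approximated via the leading eigenvalue of the correlation matrix.
import numpy as np

def harman_first_factor_share(items):
    """items: respondents x items array; returns first-factor variance share."""
    corr = np.corrcoef(np.asarray(items), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)        # ascending order
    return eigvals[-1] / eigvals.sum()        # leading eigenvalue / n_items

rng = np.random.default_rng(1)
common = rng.normal(size=(200, 1))            # simulated shared method factor
data = 0.6 * common + rng.normal(size=(200, 6))
share = harman_first_factor_share(data)
print(f"first factor explains {share:.0%} of variance")
```

A share below the conventional 50% threshold, as in the 47% reported above, is taken as evidence against a single dominant method factor.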

4.3. Hypothesis Results

For the study’s hypothesized model, Table 6, Table 7 and Table 8 display the estimated PLS-SEM path coefficients and hypothesis test results. A bootstrapping procedure with 5000 resamples was used to assess all direct, indirect, and moderated effects. This approach was selected for its robustness in generating bias-corrected confidence intervals and accurate significance estimates, particularly for testing mediation (H6 & H7) and moderation (H8 & H9).

4.3.1. Direct Effects

As shown in Table 6, AI anthropomorphism is positively and significantly associated with perceived social presence (β = 0.354, p < 0.001) and trust (β = 0.312, p < 0.001), supporting H1 and H2. However, it shows no direct association with adoption intention (β = −0.013, p = 0.760). This non-significant direct effect, while leading to rejection of H3, is itself theoretically meaningful: it indicates that anthropomorphism does not relate to adoption directly but operates entirely through internal psychological states. Combined with the significant indirect effects reported below, this pattern is consistent with full mediation—a finding that challenges universalist assumptions about anthropomorphic design. Both perceived social presence (β = 0.406, p < 0.001) and trust (β = 0.445, p < 0.001) significantly predict adoption intention, supporting H4 and H5.

4.3.2. Mediation Effects

Both mediation hypotheses are supported (Table 7). Perceived Social Presence (PSP) and Trust in the AI System (TAI) operate as parallel mediators, transmitting the influence of AI anthropomorphism (AIA) to adoption intention (INT) through two concurrent psychological pathways. The specific indirect effects are positive and statistically significant: AIA → PSP → INT (β = 0.144, 95% CI [0.062, 0.226], p = 0.001) and AIA → TAI → INT (β = 0.139, 95% CI [0.068, 0.210], p < 0.001). Critically, the direct effect of AIA on INT is non-significant (β = −0.013, p = 0.760), indicating that anthropomorphism influences adoption exclusively through these internal psychological states—with no residual direct pathway.
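The percentile-bootstrap logic behind these indirect-effect confidence intervals can be sketched on observed variables: resample cases, refit the two regressions, and take percentile bounds on the product of the a and b paths. SmartPLS performs the resampling on the full latent-variable model with bias correction, so the toy example below (with simulated data) is illustrative only.

```python
# Percentile-bootstrap CI for a specific indirect effect a*b
# (analogous to the AIA -> PSP -> INT path).
import numpy as np

def boot_indirect(x, m, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)                       # resample rows
        a = np.polyfit(x[i], m[i], 1)[0]                # slope of X -> M
        b_path = np.linalg.lstsq(
            np.column_stack([m[i], x[i], np.ones(n)]),  # Y on M and X
            y[i], rcond=None)[0][0]                     # coefficient on M
        est[b] = a * b_path
    return np.percentile(est, [2.5, 97.5])

rng = np.random.default_rng(42)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)                      # true a = 0.5
y = 0.5 * m + rng.normal(size=300)                      # true b = 0.5
lo, hi = boot_indirect(x, m, y, n_boot=1000)
print(lo, hi)   # CI excluding zero indicates a significant indirect effect
```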

4.3.3. Moderation Effects

As shown in Table 8, skepticism significantly moderates both pathways. The interaction between skepticism and AI anthropomorphism negatively predicts perceived social presence (β = −0.268, p < 0.001) and trust in AI (β = −0.090, p = 0.001), supporting H8 and H9. Simple slope analyses (Figure 2) confirm that among low-skepticism users, AI anthropomorphism strongly predicts perceived social presence (β = 0.412, p < 0.001) and trust (β = 0.284, p < 0.001); among high-skepticism users, these effects are markedly attenuated—perceived social presence becomes non-significant (β = 0.098, p = 0.124), and trust is more than halved (β = 0.118, p = 0.042). In fragile economies like Sierra Leone, where institutional distrust fuels skepticism, these findings underscore that AI deployment alone is insufficient; it must be paired with transparency, data protection, and digital literacy initiatives to realize its full potential. Further, skepticism significantly moderates the indirect effect via perceived social presence (Index = −0.109, 95% CI [−0.187, −0.042]) but not via trust (Index = −0.040, 95% CI [−0.081, 0.003]), indicating that the cognitive (trust) pathway is more robust to skepticism—consistent with Privacy Calculus Theory (see Table 9).
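The simple-slopes and moderated-mediation quantities reported above follow directly from the interaction model M = b0 + b1·X + b2·W + b3·X·W: the conditional effect of X at a given level of W is b1 + b3·W, and the index of moderated mediation for an X → M → Y path is b3 multiplied by the M → Y coefficient. A back-of-envelope sketch using the reported coefficients; note that the published simple slopes come from the standardized SmartPLS solution and therefore differ from this raw combination.

```python
# Conditional effects at W = +/- 1 SD and the index of moderated mediation,
# using coefficients reported in the text for the AIA x skepticism interaction.
b1, b3 = 0.354, -0.268        # main effect of X on M; interaction X*W (Table 8 scale)
sd_w = 1.0                    # W standardized, so +/- 1 SD = +/- 1

slope_low  = b1 + b3 * (-sd_w)   # conditional effect at low skepticism
slope_high = b1 + b3 * (+sd_w)   # conditional effect at high skepticism
print(slope_low, slope_high)     # ≈ 0.622 and 0.086

# Index of moderated mediation for X -> M -> Y: b3 * (M -> Y path).
b_my = 0.406                  # effect of social presence on intention (Table 6)
index_mod_med = b3 * b_my
print(index_mod_med)          # ≈ -0.109, matching the reported index
```

The product −0.268 × 0.406 reproduces the reported Index = −0.109 for the social-presence pathway.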

4.3.4. Robustness Analysis

Robustness analyses confirm that the inclusion of control variables (age, gender, education, income, prior AI experience, bank type, smartphone proficiency, and region) did not substantively alter the direction, significance, or magnitude of the hypothesized relationships (Δβ < 0.02; see Table 10).
The structural model (see Figure 3) confirms perceived social presence and trust in the AI system as key mediators between AI anthropomorphism and intention to adopt AI-driven banking services. Trust in AI also directly impacts adoption intention, while AI anthropomorphism does not exert a significant direct effect. A key finding is that skepticism negatively moderates both the AIA–PSP and AIA–TAI relationships, indicating that the effectiveness of anthropomorphic design is significantly diminished among highly skeptical users.

5. Discussion

This study investigates how AI anthropomorphism shapes customer adoption of AI-driven banking services in Sierra Leone—a fragile economy marked by weak institutions, emerging digital infrastructure, and deep-seated relational norms in financial service delivery. By integrating the Stimulus–Organism–Response (S-O-R) framework with the Computers Are Social Actors (CASA) paradigm, Institutional Trust Theory, and Privacy Calculus Theory, we analyze direct, mediated (via perceived social presence and trust), and moderated (by skepticism) effects. The findings provide associational evidence that challenges the universal applicability of human-like AI design principles and offers context-specific insights that can inform future research and design thinking for AI deployment in fragile economies.

5.1. AI Anthropomorphism and Adoption Intention

The strong, positive associations of AI anthropomorphism with perceived social presence and trust in the AI system are consistent with the CASA paradigm, which posits that users automatically apply social heuristics to machines exhibiting human-like cues—even when aware of their artificial nature (Nass & Moon, 2000). This aligns with meta-analytic evidence demonstrating that anthropomorphic design reliably enhances social presence and trust across service contexts (Blut et al., 2021). In Sierra Leone’s interpersonal banking culture, where face-to-face rapport remains normative, anthropomorphic features such as empathetic language or a personalized voice help bridge the emotional gap between impersonal technology and human-centered service expectations (T. Li et al., 2025). This finding resonates with recent work by Bhatnagr et al. (2024) and Schrank (2025), who argue that anthropomorphic AI features, including perceived intelligence and human-like characteristics, enhance interaction quality and predict continued usage intentions in digital banking services.
Conversely, the non-significant direct effect of AI anthropomorphism on adoption intention (β = −0.013, p = 0.760) is arguably the study’s most theoretically significant finding. It starkly contrasts with studies in WEIRD contexts that report consistent direct links (Kumar et al., 2025), where anthropomorphic cues appear to influence behavior directly without requiring full cognitive mediation. The absence of a direct effect in Sierra Leone reveals a critical boundary condition on CASA: in fragile economies marked by institutional weakness and poor data governance, surface-level social heuristics are insufficient to drive adoption. Users do not adopt AI because it ‘feels human’; they require that this humanity be translated into verifiable psychological states—connection (social presence) and assurance (trust). This pattern positions anthropomorphism not as a direct persuasive tool but as a relational amplifier whose behavioral influence is contingent on deeper cognitive-affective processing. The null direct effect, therefore, is not a disconfirmed expectation but evidence that in low-trust environments, surface-level heuristics must be validated through deeper cognitive appraisal before influencing behavior, a result that challenges the assumed universality of CASA.

5.2. The Dual Mediating Pathways: Social Presence and Trust

The findings robustly confirm perceived social presence and trust in the AI system as essential psychological mediators. Both constructs fully transmit the influence of anthropomorphism to adoption intention. Social presence operates as the initial affective gateway: when users feel they are interacting with a sentient, responsive entity, anxiety around digital transactions diminishes (Vazquez et al., 2023). Trust emerges as the stronger proximal driver, reflecting the high-stakes nature of banking where benevolence and data integrity outweigh mere co-presence. Most significantly, the mediation analysis reveals that AI anthropomorphism is associated with adoption exclusively through these internal states. This positions social presence and trust not as secondary outcomes but as the core transmission mechanisms through which design stimuli become behavioral intent. Given the limited technical schemas available to users in Sierra Leone for evaluating AI, such relational cues act as essential cognitive heuristics. An AI that “listens” and “protects” satisfies deep-seated expectations of financial stewardship, converting anthropomorphic cues into actionable trust.

5.3. The Contingent Role of Skepticism

The moderation analysis delivers the study’s most contextually profound insight: skepticism functions as a powerful boundary condition that reconfigures the efficacy of anthropomorphic design. The negative interactions between skepticism and anthropomorphism on both social presence and trust confirm that in environments with weak data governance and historical institutional distrust, users engage in threat appraisal rather than social acceptance. The results further reveal that among low-skepticism users, anthropomorphism strongly enhances social presence and trust; among high-skepticism users, these effects are attenuated by roughly 76% and 58%, respectively (from β = 0.412 to 0.098 for social presence, and from β = 0.284 to 0.118 for trust). Critically, however, the index of moderated mediation indicates that this attenuation significantly disrupts the indirect pathway to adoption only for social presence, whereas the trust pathway remains partially robust. Specifically, under high skepticism, the indirect effect via social presence becomes non-significant (β = 0.035, 95% CI [−0.012, 0.098]), while the indirect effect via trust remains significant albeit reduced (β = 0.099, 95% CI [0.041, 0.172]).
This pattern aligns with Privacy Calculus Theory (Dinev & Hart, 2006), suggesting that skeptical users engage not in social attribution but in threat appraisal, interpreting human-like cues not as benevolent but as potentially manipulative. Consequently, anthropomorphism’s relational benefits are contingent on a foundational level of user trust—a condition that cannot be engineered through interface design alone in contexts marked by systemic institutional distrust. Skepticism in Sierra Leone reflects a contextually rational stance, given uneven digital literacy and emerging regulatory frameworks. Consequently, anthropomorphism’s effectiveness is contingent not merely on design quality but on the readiness of the socio-institutional context to support trust.

6. Implications of the Study

6.1. Theoretical Implications

This study advances human–AI interaction theory in four interlocking ways.
First, we establish institutional fragility as a critical boundary condition for prevailing AI adoption models, reframing AI anthropomorphism not as a universal design heuristic—as often assumed in Western-derived literature—but as a context-contingent stimulus whose efficacy is conditional on socio-institutional infrastructure. While prior research largely situated in WEIRD contexts treats anthropomorphism as a direct catalyst for engagement (J. Wang et al., 2025), our findings demonstrate that in fragile economies, its influence is fully mediated by psychological states (social presence and trust) and severely weakened by skepticism. This challenges the implicit universality of the CASA paradigm (Nass & Moon, 2000), revealing its limits when users lack the institutional framework to interpret social cues as benevolent. In doing so, we extend the S-O-R framework beyond its origins into domains of institutional voids, where stimuli are filtered through risk calculus rather than social heuristics.
Second, the findings suggest a more complex operation of the ‘organism’ in the S-O-R framework than is often depicted in WEIRD-context studies. In Sierra Leone, anthropomorphism appears to simultaneously activate two distinct psychological pathways—an affective-relational one (social presence) and a cognitive-institutional one (trust). This pattern is consistent with the idea of dual-channel appraisal, a hypothesized adaptive response in contexts of institutional fragility. We acknowledge, however, that this dual-channel interpretation is inferential based on our cross-sectional data and would require experimental or process-tracing validation to confirm the cognitive temporality. Future research could experimentally test whether this represents truly parallel processing or a rapid, sequential evaluation.
Third, the findings point to a potential duality in the formation of trust in fragile economies. The strong moderating role of skepticism suggests that trust is not simply granted based on surface-level cues but is actively ‘negotiated’ by the user, who weighs relational signals against perceived institutional risks; anthropomorphic cues foster trust only when users perceive them as authentic signals of accountability, not as persuasive tactics. This synthesis of Institutional Trust Theory (Zucker, 1986), which posits that in weak-institution settings trust substitutes for formal safeguards, and Privacy Calculus Theory (Dinev & Hart, 2006), which frames technology adoption as a risk-benefit trade-off, reveals that trust in AI is not merely cognitive or affective, but institutionally embedded. When regulatory oversight is weak, users demand more than likability; they require verifiable integrity. Trustworthy AI systems must therefore satisfy comprehensive requirements, including explainability, accountability, robustness, and human oversight, to ensure safety and reliability before deployment (Alzubaidi et al., 2023). This concept of ‘negotiated trust’, where user skepticism acts as a key filter, emerges as a central insight from our exploratory analysis and warrants further investigation.
Fourth, this study directly responds to enduring calls for contextual theorizing in information systems research (Grover & Lyytinen, 2023). By demonstrating that CASA mechanisms appear attenuated under conditions of high skepticism, we highlight the potential epistemic bias of importing Western theories into Global South contexts without adaptation. In post-conflict societies, the findings suggest users may not passively accept social cues; rather, they appear to interrogate them through a lens of structural vulnerability. Consequently, AI adoption is not a matter of interface aesthetics but of relational legitimacy—a process requiring transparency, data sovereignty, and co-designed governance (Couldry & Mejias, 2019). This reframing shifts the theoretical focus from user-centered design to contextually accountable design, aligning with recent decolonial critiques of AI ethics that emphasize power, equity, and often-overlooked African philosophies (van Norren, 2023), and acknowledging that technology is never neutral in settings marked by colonial technological legacies (Wakunuma et al., 2025). Collectively, these contributions move the field beyond universalist assumptions toward a situated, institutionally aware theory of human–AI interaction—one that recognizes that in fragile economies, design alone is never enough.

6.2. Practical Implications

The practical implications of this study call for a fundamental shift from a design-centric to a trust-centric strategy for deploying AI-driven banking services in fragile economies like Sierra Leone. For financial institutions in such settings, the findings suggest that building foundational trust may be a prerequisite for anthropomorphic design to be effective. This points to a strategy of prioritizing transparency and reliability in initial AI deployments, with human-like features introduced later to enhance, rather than create, user trust; as an associational insight, it would benefit from experimental validation. The strong moderating effect of skepticism raises the possibility that different user segments respond differently to anthropomorphic design. Banks could explore this through A/B testing to see whether less skeptical users indeed prefer human-like interfaces while more skeptical users respond better to functional, transparent designs; our study provides the initial hypothesis for such tests. For AI developers, the findings highlight the importance of context-sensitive design: allowing users some control over the interface’s social cues (e.g., toggling between a simple, text-based interaction and a more conversational one) could accommodate varying levels of user skepticism. Finally, international development partners may complement technology transfer with investments in localized trust infrastructure, such as independent AI audit mechanisms and contextually adapted consumer protection frameworks. Because our findings are associational, longitudinal or experimental validation is needed before translating them into policy.
Context-sensitive co-development of trust infrastructure alongside technological solutions could offer a more sustainable pathway for AI adoption in fragile economies. Ultimately, this study underscores that in low-trust environments, anthropomorphism is merely an amplifier of trust, not a generator of it; sustainable adoption depends on first addressing the institutional and experiential roots of skepticism through transparency, accountability, and proven integrity.

6.3. Research Limitations and Future Research

This study has several limitations. First, the cross-sectional design precludes causal inference regarding the temporal dynamics of trust formation; longitudinal or experimental designs are needed to establish directionality and to test whether skepticism attenuates with sustained AI exposure. Second, the sample overrepresents urban, digitally literate customers with high education and smartphone ownership, limiting generalizability to the rural, unbanked populations central to financial inclusion. These groups may use different access channels (e.g., USSD), rely on community-based trust, and face infrastructural constraints, so the identified psychological pathways may not hold for them; future research should adopt inclusive sampling strategies targeting these populations. Third, despite procedural safeguards and statistical controls, single-source cross-sectional data remain susceptible to common method bias. While triangulated CMB tests suggest minimal distortion, residual bias cannot be ruled out. Fourth, the findings reflect Sierra Leone’s specific post-conflict context and may not generalize to fragile economies with different institutional voids or cultural trust schemas. Fifth, anthropomorphism was measured as a perceived attribute based on prior exposure rather than as a manipulated stimulus, introducing heterogeneity. While this enhances ecological validity, it precludes causal claims about specific design features. Consequently, our findings address the psychological mechanisms linking perceived anthropomorphism to adoption, not design-level interventions.
Finally, while cultural adaptation enhanced ecological validity, embedding institutional references in trust items (e.g., ‘Even though data laws are weak here…’) may have primed respondents to link trust directly to institutional fragility, potentially inflating the observed association between skepticism and trust due to shared contextual cues; subsequent inquiry should employ context-neutral items to disentangle construct-specific variance from contextual priming effects.
Future research should (1) use experimental designs with standardized stimuli to isolate which anthropomorphic cues most effectively activate trust, and when they instead trigger skepticism or backfire; (2) adopt longitudinal designs tracking trust calibration over time and following service failures; (3) employ experience sampling to capture real-time responses; (4) adopt inclusive sampling via low-bandwidth interfaces (USSD, voice-only) to reach marginalized rural users; (5) integrate behavioral metrics (e.g., usage logs) with surveys to bridge intention–behavior gaps; and (6) conduct comparative studies across Global South contexts to distinguish universal from culturally contingent adoption pathways.

7. Conclusions

This study delineates critical boundary conditions for the broader AI adoption literature. We find that in Sierra Leone’s fragile banking context, AI anthropomorphism is associated with adoption only indirectly—through perceived social presence and trust—with no direct effect. This absence of a direct effect is not a null finding but a theoretically significant discovery: it suggests that in low-trust environments, human-like design cannot bypass users’ cognitive-affective appraisal systems. Anthropomorphism functions not as a direct persuasive cue but as a relational amplifier whose behavioral influence is entirely mediated by the psychological states it activates. These findings challenge the assumed universality of human-like AI design, revealing it as a context-contingent strategy that requires foundational trust to succeed. To our knowledge, our study is among the first to examine AI anthropomorphism in banking chatbots within Sub-Saharan Africa’s fragile economies—and the first in Sierra Leone—offering exploratory, context-specific insights into the perceived relational benefits and limitations of human-like AI.
Critically, consumer skepticism sharply weakens these pathways: among highly skeptical users, anthropomorphism’s impact on social presence drops by 83% and its impact on trust by 50%. The findings underscore the need for banks, regulators, and designers to consider the deep-seated role of skepticism in shaping user responses, highlighting that ethical and effective AI deployment in low-trust environments requires more than good design.

Author Contributions

Conceptualization, A.C.D.M. and L.Z.; methodology, A.C.D.M.; software, I.A.K.; validation, A.C.D.M. and L.Z.; formal analysis, A.C.D.M.; investigation, A.C.D.M.; data curation, A.C.D.M. and I.A.K.; writing—original draft preparation, A.C.D.M.; writing—review and editing, A.C.D.M. and L.Z.; visualization, A.C.D.M. and I.A.K.; supervision, L.Z.; project administration, A.C.D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki. Ethical oversight for this minimal-risk, anonymous survey research was provided by the Research Ethics Committee of the School of Economics and Management, Beijing Jiaotong University, which granted a waiver for formal approval (15 March 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Measurement Scales

AI Anthropomorphism (Bartneck et al., 2009):
1. The AI assistant seems to have human-like qualities.
2. The way the AI speaks sounds warm and friendly, like a person.
3. The AI assistant appears to understand my emotions.
4. The AI has a personality that feels real.
5. The AI explains decisions in plain language you understand.

Perceived Social Presence (Gefen & Straub, 2004):
1. I feel like I am interacting with a real person.
2. The AI assistant gives me a sense of being “together” during our interaction.
3. I feel that the AI is attentive to what I say.
4. The interaction feels socially engaging, not mechanical.

Trust in the AI System (Chowdhury et al., 2022):
1. Even though data laws are weak here, this AI tries to protect my information.
2. The AI is competent in handling my banking needs.
3. I feel confident that the AI will act in my best interest.
4. The AI is honest and truthful in its responses.
5. I can rely on the AI to perform as expected.

Intention to Adopt AI-Driven Banking Services (Liu et al., 2022):
1. I intend to use AI-driven banking services in the future.
2. I plan to rely on AI for my banking inquiries.
3. I would recommend using AI banking services to others.
4. I will continue using AI banking services if available.
5. I am willing to adopt new AI features offered by my bank.

Skepticism Toward AI (Zhang et al., 2016):
1. I doubt that an AI can truly understand my financial needs.
2. I am suspicious about how my data is used by AI banking systems.
3. I believe AI banking services are more about corporate control than customer benefit.

References

  1. Aguinis, H., Gottfredson, R. K., & Joo, H. (2013). Best-practice recommendations for defining, identifying, and handling outliers. Organizational Research Methods, 16(2), 270–301. [Google Scholar] [CrossRef]
  2. Akanfe, O., Bhatt, P., & Lawong, D. A. (2025). Technology advancements shaping the financial inclusion landscape: Present interventions, emergence of artificial intelligence and future directions. Information Systems Frontiers, 27(5), 2189–2212. [Google Scholar] [CrossRef]
  3. Alzubaidi, L., Al-Sabaawi, A., Bai, J., Dukhan, A., Alkenani, A. H., Al-Asadi, A., Alwzwazy, H. A., Manoufali, M., Fadhel, M. A., Albahri, A. S., Moreira, C., Ouyang, C., Zhang, J., Santamaría, J., Salhi, A., Hollman, F., Gupta, A., Duan, Y., Rabczuk, T., … Gu, Y. (2023). Towards risk-free trustworthy artificial intelligence: Significance and requirements. International Journal of Intelligent Systems, 2023(1), 4459198. [Google Scholar] [CrossRef]
  4. Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71–81. [Google Scholar] [CrossRef]
  5. Bhatnagr, P., & Rajesh, A. (2025). Artificial intelligence features and expectation confirmation theory in digital banking apps: Gen Y and Z perspective. Management Decision, 63(10), 3642–3675. [Google Scholar] [CrossRef]
  6. Bhatnagr, P., Rajesh, A., & Misra, R. (2024). Continuous intention usage of artificial intelligence enabled digital banks: A review of expectation confirmation model. Journal of Enterprise Information Management, 37(6), 1763–1787. [Google Scholar] [CrossRef]
  7. Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. [Google Scholar] [CrossRef]
  8. Brandizzi, N. (2023). Toward more human-like AI communication: A review of emergent communication research. IEEE Access, 11, 142317–142340. [Google Scholar] [CrossRef]
  9. Brislin, R. W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1(3), 185–216. [Google Scholar]
  10. Calhoun, C. S., Bobko, P., Gallimore, J. J., & Lyons, J. B. (2019). Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. Journal of Trust Research, 9(1), 28–46. [Google Scholar] [CrossRef]
  11. Chae, M.-J., & Kim, M. (2026). Too human to trust? How AI human-likeness and context orientation shape consumer preferences in premium high-tech markets. Journal of Retailing and Consumer Services, 88, 104513. [Google Scholar] [CrossRef]
  12. Chang, W.-L., Chan, C. L., & Hsieh, Y.-H. (2025). Antecedents of user satisfaction and trust in digital banking: An examination of the ISSM factors. International Journal of Bank Marketing, 43(8), 1756–1778. [Google Scholar] [CrossRef]
  13. Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59(3), 102940. [Google Scholar] [CrossRef]
  14. Chowdhury, S., Budhwar, P., Dey, P. K., Joel-Edgar, S., & Abadie, A. (2022). AI-employee collaboration and business performance: Integrating knowledge-based view, socio-technical systems and organisational socialisation framework. Journal of Business Research, 144, 31–49. [Google Scholar] [CrossRef]
  15. Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349. [Google Scholar]
  16. Curtis, R. G., Bartel, B., Ferguson, T., Blake, H. T., Northcott, C., Virgara, R., & Maher, C. A. (2021). Improving user experience of virtual health assistants: Scoping review. Journal of Medical Internet Research, 23(12), e31737. [Google Scholar] [CrossRef]
  17. da Costa Filho, M. C. M., & da Costa Hernandez, J. M. (2025). The influence of consumer skepticism toward online reviews in product evaluations. International Journal of Electronic Commerce, 29(4), 528–556. [Google Scholar] [CrossRef]
  18. De Freitas, J., Oğuz-Uğuralp, Z., Uğuralp, A. K., & Puntoni, S. (2025). AI companions reduce loneliness. Journal of Consumer Research, ucaf040. [Google Scholar] [CrossRef]
  19. Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61–80. [Google Scholar] [CrossRef]
  20. Donner, J. (2015). After access: Inclusion, development, and a more mobile Internet. MIT Press. [Google Scholar]
  21. Gefen, D., & Straub, D. W. (2004). Consumer trust in B2C e-commerce and the importance of social presence: Experiments in e-products and e-services. Omega, 32(6), 407–424. [Google Scholar] [CrossRef]
  22. Greilich, A., Bremser, K., & Wüst, K. (2025). Consumer response to anthropomorphism of text-based AI chatbots: A systematic literature review and future research directions. International Journal of Consumer Studies, 49(5), e70108. [Google Scholar] [CrossRef]
  23. Grover, V., & Lyytinen, K. (2023). The pursuit of innovative theory in the digital age. Journal of Information Technology, 38(1), 45–59. [Google Scholar] [CrossRef]
  24. Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Partial least squares structural equation modeling (PLS-SEM) using R: A workbook. Springer Nature. [Google Scholar]
  25. Kamara, A. K., & Oppong, E. O. (2025). Mobile FinTech and financial inclusion in sub-Saharan Africa: A comparative analysis. African Journal of Science, Technology, Innovation and Development, 17(7), 1051–1063. [Google Scholar] [CrossRef]
  26. Kandampully, J., Bilgihan, A., & Amer, S. M. (2023). Linking servicescape and experiencescape: Creating a collective focus for the service industry. Journal of Service Management, 34(2), 316–340. [Google Scholar] [CrossRef]
  27. Keating, B. W., Mulcahy, R., Riedel, A., Beatson, A., & Letheren, K. (2025). Designing AI to elicit positive word-of-mouth in service recovery: The role of stress, anthropomorphism, and personal resources. International Journal of Information Management, 84, 102916. [Google Scholar] [CrossRef]
  28. Kebe, I. A., Kahl, C., & Liu, Y. (2024). Charting success: The influence of leadership styles on driving sustainable employee performance in the Sierra Leonean banking sector. Sustainability, 16(21), 9600. [Google Scholar] [CrossRef]
  29. Königstorfer, F., & Thalmann, S. (2020). Applications of Artificial Intelligence in commercial banks—A research agenda for behavioral finance. Journal of Behavioral and Experimental Finance, 27, 100352. [Google Scholar] [CrossRef]
  30. Kuen, L., Westmattelmann, D., Bruckes, M., & Schewe, G. (2023). Who earns trust in online environments? A meta-analysis of trust in technology and trust in provider for technology acceptance. Electronic Markets, 33(1), 61. [Google Scholar] [CrossRef]
  31. Kumar, S., Jain, R., & Sharma, A. (2025). Anthropomorphic artificial intelligence drives consumer behavior: Comprehensive literature review and research agenda. Journal of Internet Commerce, 24(4), 185–223. [Google Scholar] [CrossRef]
  32. Li, J., Wang, N., & Wang, Y. (2025). The double-edged sword effect of generative AI anthropomorphism on users’ emotional attachment: The moderating role of task types. Aslib Journal of Information Management, 1–24. [Google Scholar] [CrossRef]
  33. Li, T., Wang, M., & Wang, F. (2025). Anthropomorphism of artificial intelligence service agent and consumer responses: A systematic literature review and future research agenda. International Journal of Consumer Studies, 49(3), e70066. [Google Scholar] [CrossRef]
  34. Li, Y., Zhou, X., Jiang, X., Fan, F., & Song, B. (2024). How service robots’ human-like appearance impacts consumer trust: A study across diverse cultures and service settings. International Journal of Contemporary Hospitality Management, 36(9), 3151–3167. [Google Scholar] [CrossRef]
  35. Liu, C.-F., Chen, Z.-C., Kuo, S.-C., & Lin, T.-C. (2022). Does AI explainability affect physicians’ intention to use AI? International Journal of Medical Informatics, 168, 104884. [Google Scholar] [CrossRef]
  36. Maduku, D. K. (2016). The effect of institutional trust on internet banking acceptance: Perspectives of South African banking retail customers. South African Journal of Economic and Management Sciences, 19(4), 533–548. [Google Scholar] [CrossRef]
  37. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. [Google Scholar] [CrossRef]
  38. Mcknight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology. ACM Transactions on Management Information Systems, 2(2), 1–25. [Google Scholar] [CrossRef]
  39. Mehrabian, A., & Russell, J. A. (1974). An approach to environmental psychology. The MIT Press. [Google Scholar]
  40. Meikle, N. L., & Bonner, B. L. (2024). Unaware and unaccepting: Human biases and the advent of artificial intelligence. American Psychological Association. [Google Scholar]
  41. Ministry of Finance—Government of Sierra Leone. (2023). Digital financial services in Sierra Leone. Available online: https://mof.gov.sl/wp-content/uploads/2024/11/Digital-Financial-Services-in-Sierra-Leone.pdf (accessed on 7 January 2026).
  42. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. [Google Scholar] [CrossRef]
  43. North, D. C. (1990). Institutions, institutional change and economic performance. Cambridge University Press. [Google Scholar]
  44. Ofori-Okyere, I., & Edghiem, F. (2026). An exploration of financial services digital encounters in developing countries. Journal of Services Marketing, 1–28. [Google Scholar] [CrossRef]
  45. Okeke, L. (2025). AI-powered chatbots and customer experience in Nigeria’s banking sector: Opportunities and challenges. Nnadiebube Journal of Social Sciences, 6(1), 71–88. [Google Scholar]
  46. Pan, S., Qin, Z., & Zhang, Y. (2024). More realistic, more better? How anthropomorphic images of virtual influencers impact the purchase intentions of consumers. Journal of Theoretical and Applied Electronic Commerce Research, 19(4), 3229–3252. [Google Scholar] [CrossRef]
  47. Peter, S., Riemer, K., & West, J. D. (2025). The benefits and dangers of anthropomorphic conversational agents. Proceedings of the National Academy of Sciences, 122(22), e2415898122. [Google Scholar] [CrossRef]
  48. Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. [Google Scholar] [CrossRef]
  49. Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63(1), 539–569. [Google Scholar] [CrossRef]
  50. Riedel, A., Mulcahy, R., & Northey, G. (2022). Feeling the love? How consumer’s political ideology shapes responses to AI financial service delivery. International Journal of Bank Marketing, 40(6), 1102–1132. [Google Scholar] [CrossRef]
  51. Schrank, J. (2025). The impact of artificial intelligence on behavioral intentions to use mobile banking in the post-COVID-19 era. Frontiers in Artificial Intelligence, 8, 1649392. [Google Scholar] [CrossRef] [PubMed]
  52. Truong, T. T. H., & Chen, J. S. (2025). When empathy is enhanced by human–AI interaction: An investigation of anthropomorphism and responsiveness on customer experience with AI chatbots. Asia Pacific Journal of Marketing and Logistics, 37(12), 3908–3925. [Google Scholar] [CrossRef]
  53. Tsekouras, D., Gutt, D., & Heimbach, I. (2024). The robo bias in conversational reviews: How the solicitation medium anthropomorphism affects product rating valence and review helpfulness. Journal of the Academy of Marketing Science, 52(6), 1651–1672. [Google Scholar] [CrossRef]
  54. van Norren, D. E. (2023). The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective. Journal of Information, Communication and Ethics in Society, 21(1), 112–128. [Google Scholar] [CrossRef]
  55. Vazquez, E. E., Patel, C., Alvidrez, S., & Siliceo, L. (2023). Images, reviews, and purchase intention on social commerce: The role of mental imagery vividness, cognitive and affective social presence. Journal of Retailing and Consumer Services, 74, 103415. [Google Scholar] [CrossRef]
  56. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  57. von Kalckreuth, N., & Feufel, M. A. (2023). Extending the privacy calculus to the mHealth domain: Survey study on the intention to use mHealth apps in Germany. JMIR Human Factors, 10, e45503. [Google Scholar] [CrossRef]
  58. von Schenk, A., Klockmann, V., & Köbis, N. (2025). Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science, 20(1), 165–181. [Google Scholar] [CrossRef] [PubMed]
  59. Wakunuma, K., Ogoh, G., Akintoye, S., & Eke, D. O. (2025). Decoloniality as an essential trustworthy AI requirement. In Trustworthy AI (pp. 255–276). Springer Nature Switzerland. [Google Scholar] [CrossRef]
  60. Wang, J., Zhou, Z., Xu, S., Yan, X., Zhang, Y., & Morrison, A. M. (2025). The importance of human touch: How robot anthropomorphism impacts customer engagement in tourism and hospitality. Journal of Vacation Marketing, 13567667251367435. [Google Scholar] [CrossRef]
  61. Wang, Z., Yuan, R., Li, B., Kumar, V., & Kumar, A. (2026). An empirical study of AI financial advisor adoption through technology vulnerabilities in the financial context. Journal of Product Innovation Management, 43(1), 14–30. [Google Scholar] [CrossRef]
  62. Yu, D., Zhao, J., Tang, R., Han, C., & Yang, M. (2025). Unlocking the service attractiveness of AI assistants: Does multi-modal anthropomorphic interaction dynamically manipulate users’ mindset metrics? Journal of Consumer Behaviour, 24(6), 2772–2792. [Google Scholar] [CrossRef]
  63. Zhang, X., Ko, M., & Carpenter, D. (2016). Development of a scale to measure skepticism toward electronic word-of-mouth. Computers in Human Behavior, 56, 198–208. [Google Scholar] [CrossRef]
  64. Zhou, T., & Ma, X. (2025). Examining generative AI user continuance intention based on the SOR model. Aslib Journal of Information Management. [Google Scholar] [CrossRef]
  65. Zucker, L. G. (1986). Production of trust: Institutional sources of economic structure, 1840–1920. Research in Organizational Behavior, 8, 53–111. [Google Scholar]
Figure 1. Conceptual Framework. Note: Solid arrows represent direct hypothesized effects. Dashed arrows represent moderating effects.
Figure 2. Simple slope analysis of (a) SKP’s negative moderation of the AIA–PSP relationship, and (b) SKP’s negative moderation of the AIA–TAI relationship.
Figure 3. Structural model.
Table 1. Respondent Demographics.

Characteristic | Category | n | %
Gender | Male | 117 | 42.2
Gender | Female | 160 | 57.8
Age | 18–25 years | 39 | 14.1
Age | 26–35 years | 70 | 25.3
Age | 36–45 years | 128 | 46.2
Age | 46+ years | 40 | 14.4
Education Level | Secondary or below | 25 | 9.0
Education Level | Diploma/Certificate | 73 | 26.4
Education Level | Bachelor’s degree or higher | 179 | 64.6
Employment Status | Employed (formal sector) | 137 | 49.5
Employment Status | Self-employed/Informal | 84 | 30.3
Employment Status | Unemployed/Student/Others | 56 | 20.2
Smartphone Ownership | Yes | 277 | 100.0
Bank Type Used | Domestic commercial bank | 93 | 33.6
Bank Type Used | Foreign commercial bank | 217 | 78.3
Prior Experience with AI Banking Services | Used chatbot/virtual assistant | 189 | 68.2
Prior Experience with AI Banking Services | Heard of but never used | 88 | 31.8
Region | Freetown (Western Area) | 150 | 54.2
Region | Makeni (Northern Province) | 44 | 15.9
Region | Bo (Southern Province) | 45 | 16.2
Region | Kenema (Eastern Province) | 38 | 13.7
Source: Authors’ Data.
Table 2. Reliability and Validity of Constructs.

Construct / Item | VIF | FL | CA | CR | AVE
AI Anthropomorphism | | | 0.840 | 0.850 | 0.608
  AIA1 | 1.743 | 0.744
  AIA2 | 1.940 | 0.785
  AIA3 | 1.768 | 0.761
  AIA4 | 2.150 | 0.816
  AIA5 | 1.854 | 0.792
Perceived Social Presence | | | 0.855 | 0.878 | 0.696
  PSP1 | 1.857 | 0.787
  PSP2 | 1.842 | 0.791
  PSP3 | 2.850 | 0.906
  PSP4 | 1.944 | 0.846
Trust in the AI System | | | 0.858 | 0.877 | 0.636
  TAI1 | 2.103 | 0.808
  TAI2 | 2.162 | 0.857
  TAI3 | 1.716 | 0.721
  TAI4 | 1.868 | 0.787
  TAI5 | 1.987 | 0.810
Intention to Adopt AI-Driven Banking Services | | | 0.899 | 0.904 | 0.713
  INT1 | 2.974 | 0.889
  INT2 | 2.327 | 0.822
  INT3 | 2.640 | 0.851
  INT4 | 2.076 | 0.813
  INT5 | 2.401 | 0.844
Skepticism | | | 0.772 | 0.774 | 0.688
  SKP1 | 1.937 | 0.859
  SKP2 | 1.770 | 0.847
  SKP3 | 1.385 | 0.780
Source: Authors’ Data. Note: VIF = variance inflation factor; FL = factor loading; CA = Cronbach’s alpha; CR = composite reliability; AVE = average variance extracted.
Table 3. Fornell and Larcker Criterion.

Construct | AIA | INT | PSP | SKP | TAI
AIA | 0.780 | | | |
INT | 0.492 | 0.844 | | |
PSP | 0.601 | 0.732 | 0.834 | |
SKP | 0.573 | 0.712 | 0.634 | 0.829 |
TAI | 0.587 | 0.742 | 0.750 | 0.659 | 0.798
Source: Authors’ Data. Note: The diagonal elements represent the square root of the Average Variance Extracted (AVE) for each construct. Off-diagonal values represent the inter-construct correlations. For adequate discriminant validity, the diagonal values should exceed the corresponding off-diagonal correlations.
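As an illustration, the Fornell–Larcker check in Table 3 can be reproduced from the published summaries: the diagonal entries are the square roots of the AVE values in Table 2, and discriminant validity holds when each diagonal entry exceeds every correlation in its row and column. The sketch below uses values transcribed from Tables 2 and 3; small rounding differences against the published diagonal are expected (e.g., √0.636 ≈ 0.797 here versus the 0.798 reported for TAI).

```python
import math

# AVE values transcribed from Table 2
ave = {"AIA": 0.608, "PSP": 0.696, "TAI": 0.636, "INT": 0.713, "SKP": 0.688}

# Inter-construct correlations transcribed from Table 3 (lower triangle)
corr = {
    ("INT", "AIA"): 0.492,
    ("PSP", "AIA"): 0.601, ("PSP", "INT"): 0.732,
    ("SKP", "AIA"): 0.573, ("SKP", "INT"): 0.712, ("SKP", "PSP"): 0.634,
    ("TAI", "AIA"): 0.587, ("TAI", "INT"): 0.742,
    ("TAI", "PSP"): 0.750, ("TAI", "SKP"): 0.659,
}

# Diagonal entries: square roots of AVE (compare against Table 3's diagonal)
sqrt_ave = {k: round(math.sqrt(v), 3) for k, v in ave.items()}
print(sqrt_ave)

# Fornell-Larcker criterion: each construct's sqrt(AVE) must exceed
# every correlation involving that construct
fl_ok = all(r < min(sqrt_ave[a], sqrt_ave[b]) for (a, b), r in corr.items())
print("Discriminant validity (Fornell-Larcker):", fl_ok)
```

Under these published point estimates, every correlation falls below both constructs’ √AVE, so the criterion is satisfied.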
Table 4. Heterotrait–Monotrait Ratio.

Construct | AIA | INT | PSP | SKP | TAI
AIA | | | | |
INT | 0.548 | | | |
PSP | 0.687 | 0.817 | | |
SKP | 0.707 | 0.847 | 0.749 | |
TAI | 0.667 | 0.816 | 0.816 | 0.794 |
Source: Authors’ Data.
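The HTMT ratios in Table 4 can likewise be checked against the conventional cutoffs. The sketch below (values transcribed from Table 4) compares each ratio to the liberal 0.90 threshold and the more conservative 0.85 threshold; the largest ratio, 0.847 for SKP–INT, sits just below the conservative cutoff.

```python
# HTMT ratios transcribed from Table 4 (lower triangle)
htmt = {
    ("INT", "AIA"): 0.548,
    ("PSP", "AIA"): 0.687, ("PSP", "INT"): 0.817,
    ("SKP", "AIA"): 0.707, ("SKP", "INT"): 0.847, ("SKP", "PSP"): 0.749,
    ("TAI", "AIA"): 0.667, ("TAI", "INT"): 0.816,
    ("TAI", "PSP"): 0.816, ("TAI", "SKP"): 0.794,
}

# Identify the construct pair closest to the discriminant-validity boundary
worst_pair = max(htmt, key=htmt.get)
print("Largest HTMT:", worst_pair, htmt[worst_pair])

# Compare all ratios to the two commonly used thresholds
print("All below 0.90 (liberal):", all(v < 0.90 for v in htmt.values()))
print("All below 0.85 (conservative):", all(v < 0.85 for v in htmt.values()))
```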
Table 5. Model Performance and Effect Size Estimates (f2).

Construct | R2 | R2 Adjusted | Q2 Predict | RMSE | MAE | Path | f2 | Effect Size Interpretation
INT | 0.621 | 0.616 | 0.490 | 0.729 | 0.568 | AIA → INT | 0.000 | Negligible
PSP | 0.623 | 0.619 | 0.581 | 0.666 | 0.499 | AIA → PSP | 0.224 | Medium–Large
TAI | 0.515 | 0.509 | 0.497 | 0.715 | 0.586 | AIA → TAI | 0.135 | Small–Medium
 | | | | | | PSP → INT | 0.173 | Medium
 | | | | | | TAI → INT | 0.214 | Medium–Large
Source: Authors’ Data.
Table 6. Direct Effects.

Hypothesis | Path | β | t | p | 95% CI (Lower, Upper) | Supported?
H1 | AIA → PSP | 0.354 | 7.804 | <0.001 | (0.269, 0.445) | Yes
H2 | AIA → TAI | 0.312 | 4.743 | <0.001 | (0.190, 0.448) | Yes
H3 | AIA → INT | −0.013 | 0.305 | 0.760 | (−0.096, 0.072) | No
H4 | PSP → INT | 0.406 | 3.755 | <0.001 | (0.180, 0.604) | Yes
H5 | TAI → INT | 0.445 | 5.538 | <0.001 | (0.297, 0.611) | Yes
Source: Authors’ Data.
Table 7. Mediation Effects.

Hypothesis | Mediation Pathway | Specific Indirect Effect (β) | Bootstrapped 95% CI | t | p | Proportion of Total Indirect Effect
H6 | AIA → PSP → INT | 0.144 | [0.062, 0.226] | 3.433 | 0.001 | 50.9%
H7 | AIA → TAI → INT | 0.139 | [0.068, 0.210] | 3.889 | <0.001 | 49.1%
 | Total Indirect Effect | 0.283 | [0.172, 0.401] | 5.127 | <0.001 | 100%
 | Direct Effect (AIA → INT) | −0.013 | [−0.096, 0.072] | 0.305 | 0.760 |
 | Total Effect | 0.270 | [0.168, 0.379] | 4.984 | <0.001 |
Source: Authors’ Data.
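As a consistency check, each specific indirect effect in Table 7 should equal the product of its two constituent paths from Table 6, and the total effect should equal the sum of the direct and total indirect effects. The short sketch below confirms the point estimates (the bootstrapped CIs cannot be reproduced from the published summaries):

```python
# Path coefficients transcribed from Table 6
a1, a2 = 0.354, 0.312    # AIA -> PSP, AIA -> TAI
b1, b2 = 0.406, 0.445    # PSP -> INT, TAI -> INT
direct = -0.013          # AIA -> INT

# Specific indirect effects are the products of the constituent paths
ind_psp = a1 * b1
ind_tai = a2 * b2
total_indirect = ind_psp + ind_tai
total = total_indirect + direct

print(round(ind_psp, 3), round(ind_tai, 3),
      round(total_indirect, 3), round(total, 3))
```

Within rounding, the products recover Table 7’s 0.144, 0.139, 0.283, and 0.270, and the mediated share of the total indirect effect (0.144/0.283 ≈ 50.9%) matches the reported proportions.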
Table 8. Moderation Effects.

Hypothesis | Interaction Path | β | t | p | Supported?
H8 | SKP × AIA → PSP | −0.268 | 5.657 | <0.001 | Yes
H9 | SKP × AIA → TAI | −0.090 | 3.231 | 0.001 | Yes
Source: Authors’ Data.
Table 9. Conditional Indirect Effects and Index of Moderated Mediation.

Mediator | Skepticism Level | Conditional Indirect Effect (β) | Bootstrapped 95% CI
PSP | Low (−1 SD) | 0.253 | [0.162, 0.358]
PSP | High (+1 SD) | 0.035 | [−0.012, 0.098]
PSP | Index of moderated mediation | −0.109 | [−0.187, −0.042]
TAI | Low (−1 SD) | 0.179 | [0.112, 0.264]
TAI | High (+1 SD) | 0.099 | [0.041, 0.172]
TAI | Index of moderated mediation | −0.040 | [−0.081, 0.003]
Source: Authors’ Data.
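The point estimates in Table 9 follow the standard moderated-mediation decomposition: the conditional indirect effect is the conditional a-path at ±1 SD of skepticism multiplied by the b-path, and the index of moderated mediation is the interaction coefficient multiplied by the b-path. The sketch below reproduces the table from the coefficients in Tables 6 and 8 (point estimates only; the CIs require the bootstrap distribution):

```python
# a-paths and interaction coefficients (Tables 6 and 8), b-paths (Table 6)
paths = {
    "PSP": {"a": 0.354, "a_x_skp": -0.268, "b": 0.406},
    "TAI": {"a": 0.312, "a_x_skp": -0.090, "b": 0.445},
}

def conditional_indirect(a, a_x_w, b, w):
    """Conditional a-path at moderator level w (in SD units), times b."""
    return (a + a_x_w * w) * b

results = {}
for med, p in paths.items():
    results[med] = {
        "low":  conditional_indirect(p["a"], p["a_x_skp"], p["b"], -1.0),
        "high": conditional_indirect(p["a"], p["a_x_skp"], p["b"], +1.0),
        # Index of moderated mediation: interaction coefficient times b-path
        "index": p["a_x_skp"] * p["b"],
    }

for med, r in results.items():
    print(med, {k: round(v, 3) for k, v in r.items()})
```

Within rounding, this recovers Table 9’s 0.253/0.035 and index −0.109 for PSP, and 0.179/0.099 and index −0.040 for TAI.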
Table 10. Comparison of Key Path Coefficients: Models With vs. Without Control Variables.

Path | Full Model (With Controls) | Reduced Model (Without Controls) | Δβ | p-Value (Difference)
AIA → PSP | 0.354 *** | 0.361 *** | 0.007 | 0.682
AIA → TAI | 0.312 *** | 0.325 *** | 0.013 | 0.541
AIA → INT | −0.013 | −0.009 | 0.004 | 0.891
PSP → INT | 0.406 *** | 0.398 *** | −0.008 | 0.723
TAI → INT | 0.445 *** | 0.451 *** | 0.006 | 0.814
SKP × AIA → PSP | −0.268 *** | −0.274 *** | −0.006 | 0.765
SKP × AIA → TAI | −0.090 *** | −0.095 *** | −0.005 | 0.832
R2 (INT) | 0.621 | 0.618 | −0.003 |
R2 (PSP) | 0.623 | 0.615 | −0.008 |
R2 (TAI) | 0.515 | 0.509 | −0.006 |
Note: *** p < 0.001. Δβ = absolute difference in standardized coefficients between models. p-values for differences derived from 5000 bootstrap resamples comparing coefficient distributions.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
