Article

How Perceived Value Drives Usage Intention of AI Digital Human Advisors in Digital Finance

Graduate School of Management of Technology, Pukyong National University, Busan 48547, Republic of Korea
* Author to whom correspondence should be addressed.
Systems 2025, 13(11), 973; https://doi.org/10.3390/systems13110973
Submission received: 29 September 2025 / Revised: 25 October 2025 / Accepted: 28 October 2025 / Published: 31 October 2025
(This article belongs to the Special Issue Innovation Management and Digitalization of Business Models)

Abstract

This study investigates how perceived value influences user satisfaction and usage intention toward AI Digital Human Advisors in digital finance, drawing on the Stimulus–Organism–Response (S–O–R) framework. Perceived value is conceptualized as comprising functional, cognitive, and emotional dimensions, reflecting users’ utilitarian, intellectual, and affective evaluations of AI advisors. To empirically test the proposed model, a structured questionnaire survey was conducted with 524 adult users of digital financial applications in mainland China, and the data were analyzed using structural equation modeling (SEM). The results reveal that cognitive and emotional value significantly enhance both satisfaction and usage intention, whereas functional value shows no significant effect. Satisfaction fully mediates the effect of cognitive value and partially mediates that of emotional value. Moreover, switching barriers negatively moderate the satisfaction–intention link, indicating that high friction weakens the behavioral impact of satisfaction. The findings extend perceived value theory to AI-mediated financial contexts by demonstrating that emotional and cognitive engagement—rather than functional efficiency—drives sustained behavioral intention. Practically, the study highlights the importance of designing emotionally intelligent and cognitively transparent AI advisors. As the data were collected from urban users in China, where digital finance is relatively advanced, future research should validate these findings in other cultural and institutional contexts.

1. Introduction

Artificial intelligence (AI) is reshaping the financial services industry by enhancing efficiency, reducing costs, and enabling personalized user experiences. Across banking, securities, and insurance, AI applications are increasingly embedded in service processes to support decision-making and to improve operational performance. This rapid transformation has made AI not only a technological advancement but also a critical driver of innovation in digital finance, raising important questions about how users interact with and adopt AI-based services.
Within this wave of digital transformation, AI Digital Human Advisors have emerged as a salient class of intelligent service agents. Equipped with voice recognition, natural language processing, and real-time affective interaction, these AI-powered virtual advisors blend task-oriented functionality with socially expressive cues, thereby enriching user experience through both instrumental efficiency and relational engagement. They are now increasingly deployed across banking, securities, and insurance to expand service accessibility, foster trust, and sustain user engagement. Yet, despite this visible diffusion, public understanding and acceptance remain limited, suggesting that adoption cannot be explained by technical performance alone. Most existing research has concentrated on system design and service performance [1], leaving the psychological mechanisms that shape user acceptance underexplored. In particular, it remains unclear how the perceived value users form in interactions with AI Digital Human Advisors unfolds across functional, cognitive, and emotional dimensions and, in turn, translates into sustained usage intention. Although prior work on AI-based services (e.g., robo-advisors and chatbots) has examined constructs such as perceived usefulness, effectiveness, and efficiency, these studies typically foreground utilitarian outcomes and offer only limited insight into the cognitive appraisals (e.g., perceived expertise, clarity) and emotional responses (e.g., enjoyment, warmth) that underlie engagement. Few investigations have explicitly analyzed how users jointly evaluate anthropomorphic, socially interactive advisors that simultaneously deliver decision support and simulate humanlike communication in financial contexts. This incomplete account of how multidimensional value perceptions map onto satisfaction and behavioral intention constitutes a substantive theoretical gap that the present study addresses.
To address this gap, the present study focuses on the following main research question: How do users’ multidimensional perceptions of value toward AI Digital Human Advisors influence their satisfaction and subsequent usage intention in digital finance? This question is theoretically important because it connects perceived value theory with the Stimulus–Organism–Response (S–O–R) framework, thereby explaining the psychological mechanisms underlying continued adoption of AI-mediated financial services. Practically, it responds to the growing demand for user-centered design principles in intelligent financial platforms, where emotional and cognitive engagement increasingly determine sustainable user behavior. Answering this question requires a more comprehensive theoretical lens. Anchored in the S–O–R framework [2], the present study draws on perceived value theory to examine how users’ functional, cognitive, and emotional evaluations of AI Digital Human Advisors shape their satisfaction and subsequent behavioral intention. By focusing on these multiple dimensions of value, the study addresses limitations of prior work that has overemphasized technical and utilitarian aspects while overlooking the roles of cognition and emotion in financial advisory contexts. To empirically test the proposed model, this study adopts a quantitative survey design combined with structural equation modeling (SEM), a method well suited to assessing multiple latent constructs, mediating effects, and moderating relationships in behavioral research. Moreover, this study introduces switching barriers as a critical boundary condition. Switching barriers capture the inertia and cost-related considerations that can weaken the link between satisfaction and continued usage. Although switching barriers have been extensively studied in traditional service environments [3], they remain underexamined in the context of AI-driven financial interactions. Incorporating this construct refines the S–O–R model and provides a more nuanced understanding of when satisfaction does, or does not, translate into usage intention.
Ultimately, this study aims to (1) empirically validate the relationship between perceived value and user satisfaction, (2) examine how satisfaction drives usage intention, and (3) test the moderating role of switching barriers. Accordingly, this study addresses the following research questions: RQ1: How do the functional, cognitive, and emotional dimensions of perceived value influence user satisfaction with AI Digital Human Advisors? RQ2: How does user satisfaction mediate the relationship between perceived value and continued usage intention? RQ3: To what extent do switching barriers moderate the relationship between satisfaction and usage intention? By answering these questions, this research seeks to provide both theoretical and managerial insights into the psychological mechanisms driving AI-mediated financial service adoption in the Chinese context. In doing so, the research contributes theoretical insights that advance digital finance scholarship and offers practical guidance for the strategic deployment of AI Digital Human Advisors to support intelligent service delivery. The remainder of this paper is structured as follows. Section 2 presents the literature review and develops the research hypotheses. Section 3 describes the research design, measurement development, and data collection procedures. Section 4 reports the empirical results, including confirmatory factor analysis, structural model testing, and moderation analysis. Section 5 discusses the theoretical and practical implications, research limitations, and directions for future studies.

2. Literature Review and Hypothesis Development

2.1. AI Digital Human Advisors in Financial Services

AI digital human advisors are increasingly used in financial services to provide interactive, real-time consulting and advisory support through technologies such as natural language processing, virtual avatar modeling, and affective computing. These agents simulate human advisors in both appearance and behavior, enabling service providers to scale client interactions while enhancing perceived professionalism and responsiveness [4]. Unlike conventional chatbots, AI digital humans facilitate multimodal, emotionally intelligent interactions, fostering higher levels of user engagement and service personalization. This is consistent with recent findings in robo-advisory research, which highlight the critical role of perceived value in shaping AI-based service engagement [5,6].
From a value-oriented perspective, these systems deliver significant functional benefits, such as convenience, operational efficiency, and innovative service experiences. For example, users often perceive digital advisors as capable of presenting comprehensive financial scenarios and enabling efficient information acquisition through user-friendly interfaces—features that closely align with functional value [7]. In addition to functional utility, AI digital human advisors also enhance users’ cognitive value by providing accurate, objective, and contextually rich financial information. These advisors serve as informational support systems that help users better understand complex financial products and services [8]. Through dialog-based interaction and intelligent feedback, they offer users novel ways of accessing and processing financial knowledge.
Moreover, the emotional dimension of digital human advisors has emerged as a key driver of user experience. Emotional design elements—such as lifelike appearance, natural expressions, and empathic communication—contribute to more enjoyable and esthetically pleasing user interactions [9]. These emotionally engaging features improve user satisfaction and strengthen perceptions of digital human advisors as trustworthy, competent, and socially present—attributes that are essential for sustained engagement in digital financial services [10,11]. The adoption of such technologies aligns with the broader transformation of financial services toward digital inclusivity. AI digital human advisors facilitate this shift by offering scalable, low-cost, and around-the-clock access to financial consultation, often outperforming human counterparts in terms of consistency and availability.
Taken together, AI digital human advisors serve not only as technological tools but also as relational agents that fulfill users’ functional, cognitive, and emotional needs. As such, they are well-positioned to influence satisfaction and long-term usage intention, especially in the evolving landscape of digital financial ecosystems. Recent cross-national evidence also supports the growing role of artificial intelligence in banking and financial services, indicating that users’ trust, perceived usefulness, and risk perceptions significantly influence the adoption of AI-based systems in financial contexts [12]. At the macroeconomic level, prior studies have examined digital finance primarily through asset-level analyses. For example, recent research has explored the dynamic relationships among traditional and digital assets, identifying cryptocurrencies as emerging safe-haven instruments during inflationary periods [13]. While such work provides valuable macro-level insights into digital asset behavior, the present study shifts the analytical focus to the micro-level, exploring how users’ perceived functional, cognitive, and emotional value toward AI Digital Human Advisors shapes satisfaction and behavioral intention. This user-centered perspective complements prior macroeconomic investigations by uncovering the psychological mechanisms that drive engagement with AI-mediated financial services.

2.2. Perceived Value and Its Impact on Satisfaction

Perceived value is a foundational concept in consumer behavior research, widely recognized as a critical determinant of customer attitudes and behavioral outcomes. It refers to the user’s overall evaluation of the benefits received relative to the costs incurred in acquiring and using a product or service [14]. Going beyond economic exchange, perceived value encompasses a spectrum of functional, cognitive, emotional, and experiential judgments that guide user decisions [15,16]. Recent studies have extended this framework to digital financial environments, validating perceived value dimensions in AI-mediated interactions [17,18]. Supporting this view, recent research on AI adoption in financial services demonstrates that users’ perceptions of usefulness, reliability, and technological readiness critically shape their acceptance of AI-enabled systems [19]. In the context of AI digital human advisors, perceived value has evolved into a multidimensional construct, particularly relevant in technology-mediated service environments. This study conceptualizes perceived value as comprising functional, cognitive, and emotional dimensions, building on prior work [8]. Functional value refers to the practical utility, innovation, and effectiveness of the service offering, such as the ability of AI advisors to deliver scenario-based guidance or intuitive user interfaces [7]. Cognitive value represents the informational and knowledge-enhancing aspects of user interaction—such as helping users understand complex financial products more effectively [8]. This interpretation is consistent with recent theoretical perspectives on AI-enabled innovations in the financial sector, which highlight that users’ cognitive understanding and knowledge-based trust are fundamental to the acceptance of intelligent systems [20]. Emotional value reflects the degree of enjoyment, interest, and esthetic satisfaction elicited during interaction with the advisor, often shaped by its anthropomorphic features and emotionally responsive behavior [9]. Recent evidence in digital finance shows that AI-based personalization enhances users’ perceived trust and emotional attachment, thereby reinforcing both cognitive and emotional value in their overall service experience [21].
Satisfaction, in turn, is broadly defined as a user’s affective response to the evaluation of a service relative to expectations [22]. It serves as a key outcome of value perception, particularly in AI-mediated services where both instrumental performance and emotional engagement contribute to overall user appraisal. Prior empirical studies have consistently shown that higher levels of perceived functional, cognitive, and emotional value are associated with greater user satisfaction [23,24,25]. In intelligent systems, satisfaction also depends on the advisor’s perceived intelligence, clarity, and responsiveness. For example, the importance of affective evaluation in forming satisfaction judgments has been emphasized, which is especially relevant in digital environments characterized by limited human contact [26].
Accordingly, this study proposes that each dimension of perceived value contributes positively to user satisfaction when engaging with AI digital human advisors.
H1: 
The functional value of AI digital human advisors positively influences user satisfaction.
H2: 
The cognitive value of AI digital human advisors positively influences user satisfaction.
H3: 
The emotional value of AI digital human advisors positively influences user satisfaction.
Perceived value theory provides a comprehensive foundation for distinguishing the three dimensions of user evaluation toward AI Digital Human Advisors. Functional value reflects utilitarian benefits and innovation efficiency; cognitive value captures rational assessments of informational clarity, credibility, and accuracy; and emotional value concerns affective responses such as enjoyment, warmth, and human-likeness [7,9,14]. Within the Stimulus–Organism–Response (S–O–R) framework [2], these three value perceptions act as stimuli that generate the organismic state of satisfaction, which in turn leads to the response of usage intention [27]. Switching barriers function as a contextual moderator that dampens the translation of satisfaction into behavioral intention, reflecting psychological or procedural friction in AI-mediated service use.

2.3. Usage Intention in Intelligent Financial Services

Usage intention refers to an individual’s conscious plan to engage with a product or service in the future, assuming freedom of choice. In the context of AI-enabled services, it reflects the user’s motivation to continue using intelligent systems, such as AI digital human advisors, for decision support and financial interactions. Rooted in the Theory of Planned Behavior [28] and extended by the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [29], usage intention is shaped by both rational assessments and affective responses. Key antecedents include perceived usefulness, habit, hedonic motivation, and social influence.
Recent studies in digitally mediated financial services emphasize the importance of experiential value in shaping behavioral intention. For example, emotional resonance and parasocial connections with non-human agents—such as influencers or virtual advisors—have been shown to enhance behavioral commitment, even in the absence of real interpersonal contact [30]. In live-streaming environments, factors such as entertainment, flow, and social interactivity are identified as key drivers of continued use [31]. These findings are relevant for AI digital human advisors, whose anthropomorphic design and affective feedback mechanisms are designed to mimic human engagement and foster long-term user relationships. Recent empirical evidence of AI financial advisor adoption further reveals that users’ perceptions of technology vulnerability, trust, and perceived risk significantly mediate the effect of value assessments on continued intention [32].
From a value-based perspective, users are more likely to adopt and continue using AI digital human advisors when they perceive the service as functionally useful, cognitively enriching, and emotionally rewarding. Functional value enhances perceptions of service reliability and operational efficiency; cognitive value fosters informativeness and learning; and emotional value amplifies enjoyment and visual appeal. Together, these dimensions shape usage intention in ways that extend beyond purely utilitarian evaluation. This study adopts a usage intention scale adapted from [33], which has been validated across various digital service contexts. The scale reflects users’ willingness to continue using, recommend, and proactively engage with AI-based advisory systems. The following hypotheses are proposed:
H4: 
The functional value of AI digital human advisors directly and positively influences users’ usage intention.
H5: 
The cognitive value of AI digital human advisors directly and positively influences users’ usage intention.
H6: 
The emotional value of AI digital human advisors directly and positively influences users’ usage intention.

2.4. Satisfaction and Its Relationship with Usage Intention

Satisfaction is a well-established concept in marketing and service literature, defined as a user’s overall affective evaluation of a product or service based on the extent to which expectations are met or exceeded [22]. Originally developed in consumer behavior research [34], the satisfaction construct has since been widely applied in the evaluation of digital interfaces, intelligent systems, and AI-based services. The Expectation-Disconfirmation Model [22] remains a cornerstone theory in this area, positing that satisfaction arises when perceived performance aligns with or exceeds prior expectations. In service quality research, satisfaction is often viewed as a mediating force between perceived value and behavioral outcomes such as loyalty, repurchase, and advocacy [35]. In digital service contexts, the evaluation of system usability, clarity, and responsiveness plays a critical role in shaping satisfaction [36,37]. Perceived risk and emotional disconnect can erode satisfaction, even when systems are functionally sound, as further emphasized in previous research [38].
In the context of AI digital human advisors, satisfaction arises from the system’s ability to meet users’ functional, emotional, and informational needs during interaction. When users perceive the advisor as knowledgeable, empathic, and responsive, they are more likely to evaluate the experience positively. Recent findings confirm that AI-specific features—such as emotional expressiveness and smooth interaction flow—significantly contribute to user satisfaction [39,40]. These findings are echoed in more recent work focusing on AI governance and ethical interaction design in digital finance [41]. Recent research on generative AI has further shown that users’ cognitive trust—shaped by perceived transparency, competence, and explainability—plays a decisive role in sustaining engagement with intelligent systems [42]. One of the most widely accepted operationalizations of satisfaction frames it as a holistic evaluation shaped by both cognitive and emotional assessments, as proposed in prior work [26]. This study adopts their framework to capture users’ overall appraisal of AI digital human advisors in financial services.
Satisfaction is expected to be a strong predictor of usage intention. Higher satisfaction levels increase the likelihood of continued usage and service recommendations, as shown in prior research [33]. In AI-mediated environments—where user experience is influenced not only by performance but also by social presence and trust—satisfaction is essential to maintaining long-term engagement. Therefore, the following hypothesis is proposed:
H7: 
User satisfaction with AI Digital Human Advisors positively influences their usage intention.

2.5. Mediating Role of Satisfaction Between Perceived Value and Usage Intention

Satisfaction has long been recognized as a key mediating construct linking users’ evaluative judgments to subsequent behavioral intentions. To provide a comprehensive explanation of how user-perceived value influences behavioral intention, this study adopts a partial mediation model. Specifically, it assumes that functional, cognitive, and emotional value exert both direct and indirect effects on usage intention. The indirect path operates through user satisfaction, which acts as a psychological mechanism translating perceived benefits into continued engagement. This approach aligns with prior research in service marketing and information systems [23,35], which emphasizes that value perceptions simultaneously shape attitudes and intentions through both affective and cognitive routes. Accordingly, the model incorporates both direct paths from perceived value to usage intention (H4–H6) and mediated paths via satisfaction (H8–H10).
In service and digital platform research, perceived value—comprising functional, cognitive, and emotional dimensions—has been shown to influence usage intention both directly and indirectly through satisfaction [14,43]. Rather than acting as an isolated outcome, satisfaction operates as a psychological mechanism that converts perceived benefits into long-term user commitment. Prior empirical studies in digital services and e-commerce have demonstrated the mediating role of satisfaction. Recent empirical research further supports this mechanism, demonstrating that satisfaction fully mediates the relationship between perceived value and behavioral intention in digital platform contexts [44]. For example, an analysis of the Swedish Customer Satisfaction Index found that customer value influences loyalty primarily through satisfaction [43]. Similarly, satisfaction has been emphasized as mediating the effects of perceived quality and fairness on behavioral intentions [14]. The sequential relationship among perceived value, satisfaction, and loyalty intention in an online services context has been confirmed, underscoring satisfaction’s central role in shaping digital behavior [44].
Given the anthropomorphic and interactive nature of AI digital human advisors, the value-satisfaction-intention mechanism is particularly relevant in this context. The value dimensions—functional utility, cognitive support, and emotional engagement—serve as key precursors to user satisfaction (as outlined in Section 2.2). When these value perceptions are positive, users are more likely to evaluate their interaction with the system favorably.
This study adopts the satisfaction framework to capture users’ holistic appraisal of their service experience, as developed in prior work [26]. In turn, satisfaction is expected to positively influence usage intention, as supported by prior research showing that perceived satisfaction is a strong predictor of users’ future engagement and advocacy behaviors in digital environments [33]. Accordingly, the following hypotheses are proposed:
H8: 
Satisfaction mediates the relationship between functional value and users’ usage intention.
H9: 
Satisfaction mediates the relationship between cognitive value and users’ usage intention.
H10: 
Satisfaction mediates the relationship between emotional value and users’ usage intention.

2.6. The Moderating Role of Switching Barriers Between Satisfaction and Usage Intention

Switching barriers have been extensively studied as key moderators of post-adoption behavior in service and technology contexts. These barriers refer to users’ perceived costs, resistance, or constraints associated with discontinuing a service and transitioning to alternatives, even when satisfaction with the current service is suboptimal [45]. They can arise from various sources, including economic constraints, habitual inertia, psychological discomfort, and a lack of appealing substitutes, all of which discourage switching behavior [46]. In traditional service domains, high switching barriers have been shown to attenuate the influence of satisfaction on behavioral outcomes, a phenomenon known as “passive loyalty” [45]. Recent evidence from the service industry also confirms this moderating mechanism, showing that switching barriers significantly weaken the satisfaction–intention relationship in the health and fitness club sector [47]. For instance, customers in the telecommunications industry were found to remain with their service providers despite dissatisfaction due to anticipated time, effort, or financial costs involved in switching [48]. Habit has been conceptualized as a form of behavioral inertia that limits responsiveness to new options, especially when users are familiar with the current system [49]. Similarly, prior usage patterns have been shown to increase the cognitive effort required for transition, thus moderating switching intentions [50].
In AI-mediated financial services, switching barriers are particularly relevant due to the shift from conventional human-based interaction. Users may face switching costs when adjusting to new system interfaces and interaction logic. They may also experience habitual disruption, as they resist transitioning away from familiar human advisors. Additionally, the perceived lack of superior alternatives—especially in terms of emotional or interpersonal connections—can reduce the appeal of switching. This study adopts a framework that conceptualizes switching barriers as comprising three dimensions: Switching Costs, Alternative Attractiveness, and Habit Strength. For example, switching barriers in traditional service industries have been evaluated from the perspectives of interpersonal relationships, switching costs, and alternative attractiveness [46]. Habit has further been incorporated as a key dimension and operationalized through item sets addressing switching costs (13 items), alternative attractiveness (3 items), and habit strength (10 items) [51]. Drawing from these models, this study applies this tripartite structure to understand the multidimensional nature of switching barriers in AI-based financial services.
In this study, switching barriers are conceptualized as a multidimensional construct comprising switching costs, alternative attractiveness, and habit. These dimensions collectively represent users’ perceived resistance to change in AI-mediated financial services. When switching costs are high, users anticipate losses of time, effort, or accumulated experience; when alternative attractiveness is low, they perceive limited benefits from changing platforms; and when habitual patterns are strong, they display inertia that discourages behavioral change. Consequently, even satisfied users may hesitate to continue engaging with AI Digital Human Advisors when these barriers are salient. Switching barriers are therefore theorized to negatively moderate the satisfaction–usage intention relationship: while satisfaction generally promotes usage intention, this positive effect weakens, without reversing direction, when users perceive high switching barriers. When switching barriers are high, even satisfied users may hesitate to deepen their commitment due to perceived risks or friction; when switching barriers are low, satisfaction plays a more decisive role in shaping usage intention. This conceptualization aligns with prior findings in service marketing and digital adoption research, where similar frictional mechanisms reduce the behavioral impact of satisfaction [46,47,48]. Accordingly, the following hypothesis is proposed:
H11: 
Switching barriers negatively moderate the relationship between user satisfaction and usage intention.

2.7. Integration of Switching Barriers into the S–O–R Framework

Despite the growing body of research on AI-enabled financial services, few studies have systematically explained how users’ multidimensional value perceptions toward AI Digital Human Advisors jointly shape satisfaction and continued usage intention. Prior models have predominantly emphasized functional or utilitarian evaluations, neglecting cognitive and emotional responses elicited by anthropomorphic and socially interactive AI systems. In addition, the boundary conditions under which satisfaction translates into behavioral intention—particularly the influence of switching barriers—remain insufficiently explored. These underexamined areas constitute the core research gaps that the present study addresses through an integrated S–O–R framework. The Stimulus–Organism–Response (S–O–R) model, originally developed in prior work [2], provides a robust theoretical foundation for understanding how environmental stimuli influence internal psychological states and ultimately shape behavioral responses. Within this framework, a stimulus (S) refers to external inputs such as interface design, sensory cues, or interpersonal interactions; the organism (O) encompasses users’ cognitive and emotional reactions; and the response (R) represents behavioral outcomes, such as intention, loyalty, or action. The model has been widely applied in environmental psychology, consumer behavior, and human–computer interaction to explain user decision-making in mediated service environments.
Recent empirical studies have extended the S–O–R model to digital technology contexts, demonstrating its utility in capturing the dynamics of online interaction. For instance, the model has been applied in e-commerce, showing that platform reputation and website quality (S) influence consumers’ emotional and cognitive states (O), which in turn predict purchase intention (R) [52]. Their findings underscore the mediating function of affect in translating external stimuli into behavioral intention. In a similar vein, the model has been validated by identifying informativeness, entertainment value, and content relevance (S) as significant drivers of user satisfaction (O), which subsequently impacted consumers’ online purchasing behaviors (R) [53]. Collectively, these studies reinforce the explanatory power of the S–O–R paradigm in modeling digital consumer responses shaped by both cognition and emotion.
In the domain of AI-powered financial services, the S–O–R framework is especially well-suited for modeling the influence of interactive system features on user engagement. AI digital human advisors exhibit a range of socially expressive cues—such as anthropomorphic avatars, affective feedback, and personalized communication—that serve as stimuli (S). These features evoke organismic responses (O), which are conceptualized in this study as users’ perceived functional, cognitive, and emotional value, as well as satisfaction. These internal evaluations subsequently lead to responses (R), including behavioral intentions to continue using the service, recommend it, or increase engagement.
To enhance the model’s explanatory precision, this study incorporates Switching Barriers as a moderating construct in the satisfaction–intention pathway. These barriers comprise process-related constraints (e.g., time, effort, learning curve), habitual reliance on traditional advisors, and perceptions of unattractive alternatives. Such barriers may suppress or alter the strength of the satisfaction–intention linkage by introducing psychological friction or resistance to behavioral change. This is particularly salient in transitions from human-mediated to AI-mediated financial services, where users accustomed to personal interaction may experience hesitation—even when satisfied with AI systems.
Within this framework, users’ perceptions of functional, cognitive, and emotional value toward AI Digital Human Advisors operate as external stimuli (S) that evoke the internal psychological state of satisfaction (O), which in turn shapes their behavioral response (R) in the form of continued usage intention. Satisfaction thus functions as a mediating mechanism that translates perceived value into behavioral commitment. Specifically, functional value represents utilitarian and performance-related evaluations, cognitive value reflects informational clarity and perceived intelligence, and emotional value captures affective engagement and anthropomorphic warmth. Together, these dimensions explain how users form holistic judgments of AI advisors that ultimately drive their intention to continue using such services. As shown in Figure 1, this study proposes a comprehensive conceptual framework grounded in the Stimulus–Organism–Response (S–O–R) paradigm. The model integrates the three perceived value dimensions (as introduced in Section 2.2) as stimuli, user satisfaction as the organismic state, and usage intention as the behavioral response. In addition, switching barriers are introduced as a moderator between satisfaction and usage intention, reflecting users’ psychological and process-related resistance to changing service channels. The conceptual model positions the previously defined three value dimensions as independent variables, satisfaction as the mediating variable, and usage intention as the dependent variable. Switching barriers serve as a moderating construct, shaping the strength of the satisfaction–intention relationship. Together, these constructs form a moderated mediation framework that captures both direct and conditional mechanisms influencing user behavior in AI-mediated financial services.

3. Methods and Data

3.1. Survey Design and Data Collection

To systematically investigate the factors influencing users’ intention to use AI digital human advisors in digital financial communication, this study draws on Perceived Value Theory and the Stimulus–Organism–Response (S–O–R) framework to design a structured, questionnaire-based survey. The study examines how different dimensions of perceived value relate to user satisfaction and, in turn, influence usage intention. Additionally, it explores the mediating role of satisfaction in this process and tests the moderating effect of switching barriers on the satisfaction–intention relationship.
This research adopts a quantitative empirical approach, using structural equation modeling (SEM) as the primary analytical method. The questionnaire includes 34 measurement items, all rated on a seven-point Likert scale (1 = “strongly disagree,” 7 = “strongly agree”) to support measurement sensitivity and robustness in statistical analysis. Participants were recruited from the general adult population in mainland China, aged 18 and older, with prior experience using financial apps. Respondents were required to be familiar with or have interacted with AI digital human advisor services to ensure the relevance and validity of their responses. The survey was distributed through the Wenjuanxing platform and disseminated via social media channels (e.g., WeChat, Weibo, Xiaohongshu), AI-related interest groups, university societies, and online forums. To improve data quality and reduce potential bias, the questionnaire included screening items and reverse-coded questions. Demographic information—such as gender, age, frequency of use, education, occupation, monthly income, and city tier—was collected as control variables. The sampling strategy was designed to reflect the typical profile of AI technology users in China, improving the sample’s representativeness and contextual relevance [54]. The questionnaire is provided in Supplementary S3 of the Supplementary Materials.
The present study employed a cross-sectional survey design rather than a longitudinal or macro-quantitative approach for both theoretical and practical reasons. First, AI Digital Human Advisors remain at an early diffusion stage in the financial sector, and longitudinal pre- and post-implementation data are not yet systematically available. Second, the study’s objective is to uncover the psychological mechanisms—how users’ perceived value influences satisfaction and intention—rather than to measure market-level fluctuations in service demand. Third, the Stimulus–Organism–Response (S–O–R) framework focuses on internal cognitive and affective evaluations that are best captured through perceptual measures within a single time frame. Fourth, Structural Equation Modeling (SEM) was chosen because it allows simultaneous estimation of multiple latent constructs, mediating effects, and moderating relationships with high statistical efficiency. SEM has been widely recognized as an appropriate analytical method for theory testing and validation in behavioral and service research [55]. In this study, SEM was applied to test both direct and indirect relationships, complementing the statistical justification described in Section 3.2. Future studies may extend this work by employing longitudinal or mixed-method designs to trace temporal dynamics and behavioral data as AI-based advisory systems mature.

3.2. Scale Development and Pretest Validation

To ensure the psychometric validity of the measurement scales, a pretest survey was conducted prior to the main data collection. A structured questionnaire was developed based on the proposed conceptual framework and adapted from established instruments [7,8,9,26,33,46]. A total of 157 responses were collected, and after excluding incomplete or careless entries (e.g., patterned responses, failed attention checks), 134 valid responses were retained for analysis, yielding an effective response rate of 85.35%.
Reliability tests indicated high internal consistency, with Cronbach’s alpha values exceeding 0.80 for all constructs. Exploratory factor analysis (EFA) was performed using principal component extraction and Varimax rotation. The data demonstrated strong sampling adequacy (KMO = 0.875) and passed Bartlett’s test of sphericity (χ2 = 2434.439, p < 0.001), supporting the factorability of the data. Eight factors with eigenvalues greater than one were extracted, accounting for 73.30% of the total variance. This was further confirmed in the second round of EFA following item reduction. Based on item-total correlations and factor loading diagnostics, five items were removed: FV3 (low CITC), CV5 and DS2 (cross-loading), EV3 (insufficient loading), and LB5 (low item reliability). To ensure the robustness of the empirical results, the research design was conducted in two consecutive stages. The pretest stage (n = 134) was implemented to refine the questionnaire items and assess the reliability and validity of the measurement scales through exploratory factor analysis. The results confirmed satisfactory internal consistency (Cronbach’s α > 0.80) and strong sampling adequacy (KMO = 0.875; Bartlett’s p < 0.001). Building upon these validated scales, the main data collection stage was subsequently conducted, yielding 524 valid responses from users of digital financial applications in mainland China. This sample size fully meets the recommended minimum thresholds for structural equation modeling—at least 200 cases or a 10:1 ratio of sample size to estimated parameters [55]—thus ensuring sufficient statistical power and model stability. The 524 valid responses from the main survey likewise showed Cronbach’s alpha values exceeding 0.80 and met the KMO and Bartlett’s test criteria (refer to Tables S1 and S2 of Supplementary S1). These methodological refinements enhance the credibility and generalizability of the study’s findings.
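For readers who wish to reproduce these item-level diagnostics, the sketch below shows how Cronbach’s alpha, the KMO measure, Bartlett’s test, and a Varimax-rotated EFA can be computed in Python. It is illustrative only: the file name, column labels (e.g., FV1, FV2), and the grouping of items per construct are hypothetical placeholders, not the authors’ actual data or software workflow.

```python
# Illustrative pretest diagnostics: reliability, KMO, Bartlett's test, and EFA.
# File and column names (e.g., "pretest_items.csv", "FV1") are hypothetical.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

items = pd.read_csv("pretest_items.csv")  # respondents x items, 7-point Likert responses

# Internal consistency for one construct (here: hypothetical functional-value items)
alpha_fv, _ci = pg.cronbach_alpha(data=items[["FV1", "FV2", "FV4"]])
print(f"Cronbach's alpha (FV): {alpha_fv:.3f}")

# Factorability diagnostics
chi2, p = calculate_bartlett_sphericity(items)
_kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi2:.2f} (p = {p:.4g}), KMO = {kmo_total:.3f}")

# Retain factors with eigenvalues > 1, then rerun the EFA with Varimax rotation
fa0 = FactorAnalyzer(rotation=None)
fa0.fit(items)
eigenvalues, _ = fa0.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(3))  # inspect cross-loadings and weak items before removal
```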

3.3. Formal Survey Sample Profile and Construct Overview

Following the scale development and pilot testing, the study proceeded to the formal survey stage. The finalized instrument included 29 items, each demonstrating satisfactory psychometric properties, and formed the basis for the formal data collection. The complete list of validated items is provided in Table A1 of the Appendix A. The formal survey was administered between late June and early July 2025 via the professional online platform Wenjuanxing. Participants were recruited based on the criterion of having prior experience with AI digital human advisor services. This ensured the relevance and contextual accuracy of the responses.
A total of 524 valid responses were obtained (92.42% response rate), yielding a demographically diverse sample suitable for analyzing user behavior in AI-mediated financial services. Females accounted for 56.87% of respondents, and most participants were between 30 and 49 years old, reflecting the core demographic segment of digital finance users in China. Education levels were relatively high, with more than two-thirds holding a bachelor’s degree or above, and income distribution centered around the 3001–5000 RMB range. Most participants reported moderate to high usage frequency of AI financial applications. Table 1 summarizes the demographic profile of respondents, which corresponds to the typical user base targeted by digital human advisor systems. The sample exhibits a relatively balanced gender composition and broad variation in age, income, education, and city tier, consistent with national statistics on China’s digital-finance market [54]. Accordingly, the dataset can be regarded as broadly representative of urban digital-finance users.

3.4. Distributional Features and Group Differences in Core Constructs

All core constructs—functional value, cognitive value, emotional value, satisfaction, usage intention, and switching barriers—demonstrated acceptable levels of skewness and kurtosis, indicating a satisfactory approximation to normality [56] (Refer to Table A2). Initial group comparisons revealed that gender had a modest impact: female users reported significantly higher levels of cognitive value, emotional value, and usage intention (Refer to Table A3). Age differences were marginal, with only cognitive value varying across age brackets, favoring users in their 30s (Refer to Table A4). Education level showed no systematic influence on any construct (Refer to Table A5). These findings suggest that users’ perceptions of AI digital human advisors are largely consistent across sociodemographic strata, reinforcing the structural validity of the proposed model.
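The normality checks and group comparisons summarized above can be sketched with standard SciPy routines, as shown below. The snippet assumes composite (item-mean) scores and hypothetical variable names such as gender and age_group; the authors’ actual tests and statistics are those reported in Tables A2–A5, so this is only a procedural illustration.

```python
# Illustrative distributional checks and group comparisons on composite scores.
# Variable names ("FV", "gender", "age_group", etc.) are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("main_survey.csv")
constructs = ["FV", "CV", "EV", "SAT", "UI", "SB"]

# Skewness and (excess) kurtosis per construct
for c in constructs:
    print(c, round(stats.skew(df[c]), 3), round(stats.kurtosis(df[c]), 3))

# Gender comparison (Welch's t-test) for selected constructs
female = df[df["gender"] == "female"]
male = df[df["gender"] == "male"]
for c in ["CV", "EV", "UI"]:
    t, p = stats.ttest_ind(female[c], male[c], equal_var=False)
    print(f"{c}: t = {t:.2f}, p = {p:.3f}")

# Age-group comparison (one-way ANOVA) for cognitive value
groups = [g["CV"].to_numpy() for _, g in df.groupby("age_group")]
f_stat, p = stats.f_oneway(*groups)
print(f"CV by age group: F = {f_stat:.2f}, p = {p:.3f}")
```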

4. Results

4.1. Inter-Construct Correlations and Theoretical Alignment

Pearson correlation analysis revealed significant and theoretically coherent relationships among the main constructs (Refer to Table S3 of the Supplementary S1). Emotional value exhibited the strongest correlation with satisfaction (r = 0.495, p < 0.01), followed by cognitive value, while functional value showed the weakest associations. Usage intention was significantly associated with both emotional value (r = 0.470, p < 0.01) and satisfaction (r = 0.396, p < 0.01), aligning with the hypothesized affective and evaluative pathways. Notably, switching barriers displayed a consistent negative relationship with most constructs, particularly with usage intention (r = −0.253, p < 0.01), suggesting a potential dampening effect on behavioral commitment. These initial correlations support the proposed model structure and validate the theoretical decision to include satisfaction as a mediator and switching barriers as a moderator. Given that all constructs were measured on seven-point Likert scales, the ordinal nature of the data was further examined using Spearman’s rank-order correlation coefficients. The Spearman results yielded a pattern consistent with the Pearson coefficients reported above, thereby confirming the robustness and stability of the inter-construct relationships.
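As a procedural illustration of this robustness check, the sketch below computes both Pearson and Spearman correlation matrices over hypothetical composite scores; the substantive coefficients remain those reported in Table S3.

```python
# Illustrative robustness check: Pearson vs. Spearman correlations among composites.
# Column names are hypothetical; the reported coefficients are those in Table S3.
import pandas as pd

df = pd.read_csv("main_survey.csv")
constructs = ["FV", "CV", "EV", "SAT", "UI", "SB"]

pearson = df[constructs].corr(method="pearson")
spearman = df[constructs].corr(method="spearman")
print(pearson.round(3))
print(spearman.round(3))  # pattern should mirror the Pearson matrix if results are robust
```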

4.2. Confirmatory Factor Analysis (CFA) and Model Fit

To assess the reliability and structural validity of the measurement model, confirmatory factor analysis (CFA) was conducted using Mplus 8.3 on six core constructs: functional value, cognitive value, emotional value, satisfaction, usage intention, and switching barriers. Model fit was evaluated using multiple indices, including χ2/df, CFI, TLI, RMSEA, and SRMR. The model was deemed acceptable when χ2/df ≤ 5, CFI and TLI ≥ 0.90, and RMSEA and SRMR ≤ 0.08, following established criteria [46]. The CFA showed acceptable fit across all constructs, confirming convergent and discriminant validity (see Table 2 for detailed indices and factor loadings). Furthermore, discriminant validity was established by ensuring that the square root of AVE for each construct exceeded the inter-construct correlations [57]. Collectively, these findings confirm that the measurement model possesses sufficient reliability, convergent validity, and discriminant validity, thereby providing a robust foundation for subsequent structural equation modeling (SEM). As reported in Table 2, the square roots of AVE for all constructs exceed their inter-construct correlations, demonstrating adequate discriminant validity and confirming that functional value, cognitive value, emotional value, satisfaction, usage intention, and switching barriers are conceptually distinct within the measurement model. (For more detailed statistical results on Convergent Validity and Discriminant Validity, refer to Tables S4–S9 of the Supplementary S1 and Figures S1–S4 of the Supplementary S2.)
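The paper estimated the measurement model in Mplus 8.3; a rough open-source analogue of the same CFA can be written with the semopy package in Python, as sketched below. Item labels and the item-to-construct assignments are hypothetical, and the fit statistics printed by semopy.calc_stats (chi-square, CFI, TLI, RMSEA, among others) would be compared against the cut-offs cited above.

```python
# Rough Python analogue (semopy) of the six-construct CFA estimated in Mplus 8.3.
# Item labels and item-to-construct assignments below are hypothetical placeholders.
import pandas as pd
import semopy

cfa_desc = """
FV =~ FV1 + FV2 + FV4
CV =~ CV1 + CV2 + CV3 + CV4
EV =~ EV1 + EV2 + EV4
SAT =~ DS1 + DS3 + DS4
UI =~ UI1 + UI2 + UI3
SB =~ SB1 + SB2 + SB3
"""

data = pd.read_csv("main_survey.csv")
cfa = semopy.Model(cfa_desc)
cfa.fit(data)

print(semopy.calc_stats(cfa).T)  # chi2, CFI, TLI, RMSEA, etc., to check against cut-offs
print(cfa.inspect())             # loadings, from which AVE and CR can be computed by hand
```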

4.3. Structural Equation Modeling (SEM)

To examine the hypothesized relationships among perceived value dimensions, user satisfaction, usage intention, and switching barriers, structural equation modeling (SEM) was conducted using Mplus 8.3. The model incorporated the direct effects of functional, cognitive, and emotional value on usage intention, the mediating role of satisfaction, and the moderating role of switching barriers. As the model was saturated (df = 0), overall fit indices were not reported, and the analysis focused on the significance of individual path coefficients. The directional paths specified in the SEM model—linking perceived value to satisfaction and subsequently to usage intention—were theoretically grounded in prior research on service value and post-adoption behavior. Previous studies have consistently demonstrated that perceived value serves as a cognitive–affective antecedent of satisfaction, which in turn predicts behavioral intention [14,23,58]. This theoretical foundation supports the directional assumptions tested in the present model.
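Continuing the same hedged Python analogue, the structural part of the model (the three value dimensions predicting satisfaction, and satisfaction together with the value dimensions predicting usage intention) can be appended to the measurement specification as follows; the moderation term is handled separately in Section 4.6, and all names remain hypothetical.

```python
# Structural extension of the semopy sketch: value dimensions -> satisfaction -> intention,
# plus direct paths from each value dimension to usage intention (hypothetical item names).
import pandas as pd
import semopy

sem_desc = """
FV =~ FV1 + FV2 + FV4
CV =~ CV1 + CV2 + CV3 + CV4
EV =~ EV1 + EV2 + EV4
SAT =~ DS1 + DS3 + DS4
UI =~ UI1 + UI2 + UI3
SAT ~ FV + CV + EV
UI ~ FV + CV + EV + SAT
"""

data = pd.read_csv("main_survey.csv")
sem = semopy.Model(sem_desc)
sem.fit(data)
print(sem.inspect())  # path coefficients corresponding to H1-H7
```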

4.4. Direct Effects

This section presents the direct effects of perceived value dimensions on user satisfaction and usage intention. The structural model yielded mixed results. Functional value did not significantly predict satisfaction (β = 0.055, p = 0.504), thus failing to support H1. This suggests that offering practical functionality alone may be insufficient to enhance users’ overall satisfaction. In contrast, cognitive value had a significant positive effect on satisfaction (β = 0.379, p < 0.001), supporting H2 and indicating that users’ perceptions of the advisor’s intelligence and expertise are critical to generating a positive evaluative response. Emotional value also significantly predicted satisfaction (β = 0.308, p < 0.001), lending support to H3 and emphasizing the role of affective engagement and perceived warmth in shaping user experiences. The complete structural model and standardized path coefficients are presented in Figure 2.
Regarding the direct effects on usage intention, only emotional value emerged as a significant predictor (β = 0.345, p < 0.001), supporting H6. Neither functional value (β = 0.025, p = 0.715) nor cognitive value (β = 0.132, p = 0.107) showed a significant effect, leading to the rejection of H4 and H5. These results suggest that emotional resonance plays a more influential role than utilitarian or cognitive appraisals in driving continued engagement with AI digital human advisors. Satisfaction also significantly predicted usage intention (β = 0.310, p < 0.001), supporting H7 and aligning with previous findings that identify satisfaction as a robust post-adoption driver of behavioral intention.
A bootstrapping analysis based on 5000 resamples was conducted to assess the mediating role of satisfaction. For cognitive value, the indirect effect through satisfaction was significant (β = 0.073, p = 0.023, 95% CI [0.010, 0.136]), while the direct effect was not, indicating full mediation and supporting H9. Satisfaction accounted for 35.6% of the total effect, emphasizing the role of perceived intelligence and clarity in fostering behavioral engagement. Emotional value demonstrated both a direct effect (β = 0.345, p < 0.001) and an indirect effect through satisfaction (β = 0.079, p = 0.012, 95% CI [0.018, 0.140]), indicating partial mediation and supporting H10, with satisfaction contributing 18.6% to the total effect. In contrast, functional value had neither a significant direct nor indirect effect (β = 0.010, p = 0.402), and the confidence interval included zero, thereby failing to support H8. These findings confirm that emotional engagement, rather than utilitarian features, plays a decisive role in shaping satisfaction and behavioral intention toward AI Digital Human Advisors.
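The percentile-bootstrap logic behind these indirect-effect tests can be illustrated on composite scores with ordinary least squares, as in the sketch below. This is not the latent-variable estimation performed in Mplus; the 5000-resample loop, the variable names, and the OLS specification are simplifying assumptions used only to show the procedure for the cognitive-value path.

```python
# Percentile-bootstrap sketch (5000 resamples) of the indirect effect of cognitive value
# on usage intention via satisfaction, using composite scores and OLS regressions.
# Illustrative only; reported estimates come from the latent-variable model in Mplus,
# and all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("main_survey.csv")
rng = np.random.default_rng(42)
n = len(df)
indirect = np.empty(5000)

for i in range(5000):
    boot = df.iloc[rng.integers(0, n, size=n)]
    # a-path: satisfaction regressed on the three value dimensions
    a = sm.OLS(boot["SAT"], sm.add_constant(boot[["FV", "CV", "EV"]])).fit().params["CV"]
    # b-path: usage intention regressed on satisfaction plus the value dimensions
    b = sm.OLS(boot["UI"], sm.add_constant(boot[["FV", "CV", "EV", "SAT"]])).fit().params["SAT"]
    indirect[i] = a * b

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"CV -> SAT -> UI indirect effect: {indirect.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```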

4.5. Mediation Analysis

This section elaborates on the psychological mechanism through which satisfaction mediates the relationship between perceived value and usage intention. Table 3 presents the standardized indirect effects derived from the bootstrapping analysis. Satisfaction was found to fully mediate the effect of cognitive value on usage intention and to partially mediate the effect of emotional value, highlighting the central role of both informational clarity and affective engagement in shaping post-adoption behavior. In contrast, functional value showed no significant indirect effect through satisfaction, suggesting that utilitarian benefits alone are insufficient to drive continued use. These findings reinforce the view of satisfaction as a context-dependent psychological conduit that translates perceived cognitive and emotional value into behavioral commitment.

4.6. Moderation Analysis

This section examines the moderating role of switching barriers in the relationship between satisfaction and usage intention. To test this boundary condition, a moderated mediation model was estimated incorporating an interaction term between satisfaction and switching barriers. The results revealed a significant negative interaction effect (β = −0.307, p < 0.001), and the moderated mediation index was also significant (−0.255, p < 0.01), thereby supporting H11. The detailed regression results are presented in Table 4, and the interaction effect is graphically depicted in Figure 3. (Refer to Table S10 of the Supplementary S1 for the simple slope test results of Switching Barriers.)
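The interaction test reported in Table 4 can be approximated on composite scores by entering a mean-centered satisfaction × switching-barriers product term into a regression and probing simple slopes at plus and minus one standard deviation of switching barriers, as sketched below; variable names are hypothetical, and the paper’s own estimates come from the moderated mediation model in Mplus.

```python
# Sketch of the moderation test: mean-centered satisfaction x switching-barriers interaction,
# followed by simple slopes at +/-1 SD of switching barriers (composite scores, hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("main_survey.csv")
df["SAT_c"] = df["SAT"] - df["SAT"].mean()
df["SB_c"] = df["SB"] - df["SB"].mean()

model = smf.ols("UI ~ SAT_c * SB_c + FV + CV + EV", data=df).fit()
print(model.params["SAT_c:SB_c"], model.pvalues["SAT_c:SB_c"])  # interaction term tests H11

# Simple slopes of satisfaction at low (-1 SD) and high (+1 SD) switching barriers
b_sat, b_int = model.params["SAT_c"], model.params["SAT_c:SB_c"]
sd_sb = df["SB_c"].std()
print("slope at low SB: ", round(b_sat - b_int * sd_sb, 3))
print("slope at high SB:", round(b_sat + b_int * sd_sb, 3))
```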
Bootstrapped conditional indirect effects further corroborated the moderated mediation pattern. When switching barriers were low, both cognitive and emotional value significantly predicted usage intention via satisfaction (β = 0.224 and β = 0.242, respectively), indicating that these value perceptions primarily activate intention through satisfaction. However, under high switching barriers, the indirect effects became non-significant (β = −0.030 and β = −0.033, p > 0.40), confirming that switching costs reduce the psychological impact of satisfaction on intention formation. In contrast, functional value had no significant indirect effects at either level of switching barriers. These conditional effects are summarized in Table 5.
These findings confirm that switching barriers significantly weaken the positive effect of satisfaction on usage intention. This suggests that satisfaction functions as a conditional predictor—its behavioral impact depends on users’ perceived freedom to switch. When switching resistance is high, even satisfied users may hesitate to commit due to emotional detachment, cognitive friction, or platform inertia. This supports the conceptualization of switching barriers as a form of psychological friction that distorts the value-to-intention pathway. From a managerial perspective, digital finance platforms must reduce perceived switching barriers—through better usability, personalized support, and seamless migration features—to fully capitalize on the satisfaction they generate. Otherwise, high satisfaction may still fail to translate into sustained adoption.
Table 6 provides a comprehensive summary of the hypothesis test results. H2, H3, H6, H7, H9, H10, and H11 were supported, while H1, H4, H5, and H8 were not. These results emphasize the pivotal role of emotional engagement and satisfaction in driving user behavior, particularly in the context of AI digital human advisors. Moreover, the findings highlight the moderating effect of switching barriers, which inhibit the satisfaction–intention link and consequently impact sustained user engagement in the digital financial ecosystem. Overall, these results highlight that while satisfaction remains a strong predictor of usage intention, its effect is conditional upon users’ perceived switching freedom, underscoring the behavioral constraints in AI-mediated financial contexts.

5. Discussion

5.1. Theoretical Implications

This study enhances the understanding of user behavior in AI-mediated financial services by examining how different types of perceived value—functional, cognitive, and emotional—affect satisfaction and usage intention. While previous research has highlighted the importance of perceived value in service adoption [59], our findings extend this framework to AI digital human advisors. The results underscore the dominant role of cognitive and emotional value over functional value. Cognitive evaluations—focused on the AI advisor’s professionalism and expertise—showed significant indirect effects through satisfaction, even when direct effects on usage intention were not observed. Emotional value emerged as the most influential factor, both directly and indirectly affecting satisfaction and usage intention. Users’ perceptions of the advisor’s emotional warmth and intelligence foster affective engagement, which is critical for sustained behavioral intention in AI-mediated services. These findings are consistent with prior research on AI adoption and user behavior in digital financial services [4,5,6], which also emphasize the predominance of emotional and cognitive evaluations over functional utility. However, they differ from earlier service-adoption studies in traditional digital contexts [17,60] that highlighted utilitarian efficiency as the principal driver of satisfaction. This divergence may arise from the anthropomorphic and affective nature of AI Digital Human Advisors, whose perceived empathy and competence outweigh pure functionality in shaping user engagement. These comparative insights strengthen the external validity of our findings and demonstrate how affective and cognitive appraisals redefine value perception in AI-mediated financial interactions. The results further indicate that cognitive and emotional value represent higher-order evaluative processes that go beyond functional utility. Cognitive value captures users’ perceptions of the advisor’s expertise, accuracy, and informational clarity, while emotional value reflects warmth, empathy, and affective resonance during human–AI interaction. These findings imply that, in digital financial contexts, users interpret value through psychologically enriched and socially embedded experiences rather than through instrumental performance alone. This supports the emerging view that emotional and cognitive engagement constitute the core mechanisms of trust and sustained intention in AI-mediated financial services, thereby extending conventional models of perceived value and post-adoption behavior. These findings align with the S–O–R (Stimulus–Organism–Response) framework, suggesting that emotional and intellectual engagement act as core “organism” variables mediating external stimuli (AI design) and user responses (behavioral intention). These directional relationships are consistent with prior empirical studies in service marketing and information systems, which conceptualize perceived value as a formative driver of satisfaction and satisfaction as the primary determinant of continued usage intention [22,23,58]. Accordingly, the causal paths proposed and tested in this study are theoretically well supported.
In contrast, functional value, although traditionally emphasized in the digital-service literature, did not predict either satisfaction or behavioral intention. This null finding may reflect a threshold effect: users treat basic functionality as a given in contemporary digital financial systems, which diminishes its marginal impact on satisfaction or commitment. It may also point to a mismatch between what users regard as 'functional' value and what AI advisors actually offer; in financial interactions, users appear to prioritize emotional connection and cognitive clarity over technical efficiency. Moreover, the negative moderating effect of switching barriers on the satisfaction–intention link shows that satisfaction alone is not universally sufficient to generate behavioral intention. Switching barriers weaken, rather than reverse, this positive relationship: when switching costs are perceived as high, even satisfied users may feel restricted from acting on their satisfaction, echoing insights from expectancy-disconfirmation theory [22] and recent research on lock-in phenomena [46]. This supports H11 and establishes switching barriers as a critical boundary condition for satisfaction-based models.
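To make the moderation logic concrete, the brief sketch below illustrates how a satisfaction × switching-barrier interaction and its simple slopes can be probed in a regression framework. It is a minimal illustration on simulated data, not the study's analysis script; all variable names, coefficients, and the data-generating assumptions are hypothetical.

```python
# Minimal, simulated illustration of a moderated regression:
# does the satisfaction -> usage-intention slope shrink as switching barriers rise?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 524  # same size as the study's sample; the values below are simulated

satisfaction = rng.normal(5.0, 1.2, n)   # hypothetical 7-point scores
barriers = rng.normal(5.0, 1.0, n)
# Build an outcome whose satisfaction effect weakens at higher barrier levels
intention = (0.5 * satisfaction - 0.1 * barriers
             - 0.15 * (satisfaction - satisfaction.mean()) * (barriers - barriers.mean())
             + rng.normal(0, 0.8, n))

df = pd.DataFrame({"intention": intention,
                   "sat_c": satisfaction - satisfaction.mean(),   # mean-centred
                   "bar_c": barriers - barriers.mean()})

model = smf.ols("intention ~ sat_c * bar_c", data=df).fit()
print(model.params)  # main effects plus the sat_c:bar_c interaction term

# Simple slopes of satisfaction at -1 SD and +1 SD of switching barriers
b_sat, b_int = model.params["sat_c"], model.params["sat_c:bar_c"]
for label, w in [("low barriers (-1 SD)", -df["bar_c"].std()),
                 ("high barriers (+1 SD)", df["bar_c"].std())]:
    print(f"{label}: satisfaction slope = {b_sat + b_int * w:.3f}")
```

Under these simulated conditions, the satisfaction slope is visibly flatter at the high-barrier level, mirroring the pattern of results reported in Tables 4 and 5 and Figure 3.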

5.2. Practical Implications

The study yields several practical insights for developers and managers of AI digital human advisors in financial services. First, design efforts should emphasize cognitive and emotional dimensions of user experience. While functionality is essential, it no longer differentiates user experience or influences post-adoption behavior. Instead, AI agents should be designed to project expertise, communicate empathetically, and build rapport, especially in contexts requiring sustained engagement.
Second, organizations should pay attention to the psychological freedom users perceive in switching services. Our moderation analysis showed that satisfaction translates into usage intention far more strongly when users feel unconstrained. Lowering switching barriers—by simplifying account migration, offering seamless user interfaces, or enhancing data portability—can therefore strengthen the behavioral impact of satisfaction.
Third, the consistent failure of functional value to predict satisfaction or intention across models suggests that AI agents should avoid overemphasizing task efficiency at the expense of human-like interaction. Cognitive richness and emotional resonance emerge as better predictors of user loyalty.
Finally, the validated structural model supports the use of satisfaction as a partial mediator for emotional value and a full mediator for cognitive value. Developers should use this insight to guide the placement of emotional and intellectual stimuli in user journeys—for example, by offering AI-powered financial education alongside emotionally supportive dialog systems. These implications are also aligned with emerging managerial insights from recent AI-service studies [61], which stress that emotionally intelligent and cognitively transparent system design is essential for fostering long-term trust and continued use.

5.3. Limitations and Future Research Directions

Several limitations offer promising directions for future research. First, this study employed cross-sectional survey data, which restricts inferences about temporal dynamics. Longitudinal or experimental designs could capture how users' perceptions of AI advisors evolve over repeated interactions and usage cycles. Second, while switching barriers were shown to moderate the effect of satisfaction, they were operationalized through three static subdimensions. Future work could investigate dynamic switching costs that emerge from habit formation, platform lock-in, or regulatory changes. Third, although cognitive and emotional value were identified as strong predictors, this study did not examine other relational constructs such as trust, anthropomorphism, or personalization as potential mediators or moderators. Incorporating these variables—rooted in the CASA framework [62]—could yield deeper insight into value formation and user engagement mechanisms. Finally, the data were collected from urban users in mainland China, where digital-finance ecosystems are comparatively advanced and fintech adoption is concentrated in metropolitan regions. This urban focus limits the generalizability of the findings to rural or semi-urban populations, whose access to technology, financial literacy, and digital infrastructure may differ substantially and who may evaluate AI Digital Human Advisors against different value priorities or face distinct psychological and contextual constraints. Future research should therefore extend data collection to rural populations and to other cultural and institutional settings, testing whether similar value hierarchies and switching-cost sensitivities hold across differing levels of fintech maturity, financial inclusion, and socioeconomic context [63].

6. Conclusions

This study examined how perceived value influences user satisfaction and usage intention toward AI Digital Human Advisors in digital finance, drawing on the Stimulus–Organism–Response (S–O–R) framework. By integrating functional, cognitive, and emotional value as key dimensions of perceived value, the study found that cognitive and emotional value significantly enhance both satisfaction and usage intention, whereas functional value exerts no direct effect. Satisfaction fully mediates the relationship between cognitive value and intention and partially mediates that of emotional value, confirming its pivotal psychological role. Furthermore, switching barriers negatively moderate the satisfaction–intention link, suggesting that satisfaction does not automatically translate into behavioral commitment when users perceive high friction or limited alternatives. Theoretically, these findings extend perceived value theory into AI-mediated financial contexts, demonstrating that users’ cognitive and emotional engagement—rather than functional efficiency—drives sustained behavioral intention. The results also refine the S–O–R framework by identifying switching barriers as a boundary condition that shapes satisfaction-based responses. Practically, the study highlights that AI Digital Human Advisors should be designed to convey emotional warmth, relational empathy, and cognitive clarity to foster user trust, satisfaction, and continued engagement. Future research can build on these findings by adopting longitudinal and cross-cultural approaches to explore how digital maturity and socioeconomic diversity affect user perceptions and behavioral patterns.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/systems13110973/s1, Supplementary S1: Table S1: Correlation Analysis of Variables; Table S2: Reliability Analysis of Variables; Table S3: KMO and Bartlett’s Test of Sphericity Results; Table S4: Convergent Validity of Perceived Value Construct in Confirmatory Factor Analysis; Table S5: Discriminant Validity Test of Perceived Value; Table S6: Convergent Validity of Satisfaction in Confirmatory Factor Analysis; Table S7: Convergent Validity of Usage Intention in Confirmatory Factor Analysis; Table S8: Convergent Validity of Switching Barriers in Confirmatory Factor Analysis; Table S9: Discriminant Validity Test of Switching Barriers; Table S10: Simple Slope Test Results; Supplementary S2: Figure S1: Confirmatory Factor Analysis of Perceived Value Scale; Figure S2: Confirmatory Factor Analysis of Satisfaction Scale; Figure S3: Confirmatory Factor Analysis of Usage Intention Scale; Figure S4: Confirmatory Factor Analysis of Switching Barriers Scale; Supplementary S3: Questionnaire.

Author Contributions

Conceptualization, Y.T. and H.S.; methodology, Y.T.; validation, Y.T. and H.S.; formal analysis, Y.T.; investigation, Y.T.; resources, H.S.; data curation, Y.T.; writing—original draft preparation, Y.T.; writing—review and editing, Y.T. and H.S.; supervision, H.S.; project administration, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study adhered to the Declaration of Helsinki guidelines. The nature of the research and the type of data collected did not involve sensitive information, as defined under Article 23 of the Personal Information Protection Act of Korea; therefore, approval from an Ethics Committee was not required.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study can be requested from the authors. The data are not publicly available due to privacy or ethical restrictions.

Acknowledgments

The authors would like to thank Pukyong National University for administrative support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Measurement Items and Sources.
Construct | Code | Measurement Item | Source
Functional Value | FV1 | The AI digital human advisor provides comprehensive and complete financial information. | [7,64]
 | FV2 | The AI digital human advisor enhances the quality of service. |
 | FV4 | The interactive interface of AI digital human advisors provides good visual design and usability. |
 | FV5 | I believe AI digital human advisors represent an innovative form of financial service. |
Cognitive Value | CV1 | AI digital human advisors help me better understand financial products or services. | [8]
 | CV2 | AI digital human advisors provide more objective information. |
 | CV3 | AI digital human advisors provide more accurate information. |
 | CV4 | AI digital human advisors offer a new way to learn about financial products and services. |
Emotional Value | EV1 | Using AI digital human advisors makes me feel relaxed and happy. | [9,65]
 | EV2 | Watching AI digital human advisors explain financial products is very interesting. |
 | EV4 | The personification of AI digital consultants does not make me uncomfortable. |
 | EV5 | The AI digital human advisor captured my attention. |
Satisfaction | DS1 | I am satisfied with my experience using AI digital human advisors. | [26,58]
 | DS3 | AI digital human advisors meet my financial consultation needs well. |
 | DS4 | Overall, I am satisfied with the services provided by AI digital human advisors. |
 | DS5 | I would be happy to continue relying on AI digital human advisors for financial services. |
Switching Barriers (Switching Cost/Alternative Attractiveness/Habit Strength) | LB1 | Adapting to AI digital human advisors requires considerable time and effort. | [46,51]
 | LB2 | Using AI digital human advisors feels complicated and hard to operate. |
 | LB3 | Switching to AI digital human advisors causes me to lose part of my previous service habits. |
 | LB4 | If I had to replace human advisors, I would not consider AI digital human advisors a better option. |
 | LB6 | I believe AI digital human advisors can hardly fully replace human advisors. |
 | LB7 | I have become accustomed to using human customer service or human financial advisors. |
 | LB8 | I prefer communicating with traditional financial advisors rather than switching to AI. |
 | LB9 | While using the AI digital human advisor for financial management, I felt that my financial habits have changed. |
Usage Intention | WW1 | I am willing to continue using AI digital human advisors in the future. | [33,66]
 | WW2 | I will actively follow AI digital human advisor services on major financial platforms and use them more frequently. |
 | WW3 | I am willing to speak positively about AI digital human advisor services to friends and family. |
 | WW4 | I would recommend AI digital human advisors to others. |
 | WW5 | I am likely to explore more financial services provided by AI digital human advisors in the future. |
Table A2. Descriptive Statistical Analysis of Study Variables (N = 524).
Variable | Min | Max | M ± SD | Skewness | Kurtosis (−3)
Functional Value | 1.25 | 7.00 | 4.584 ± 1.271 | −0.389 | −0.574
Cognitive Value | 1.25 | 7.00 | 4.786 ± 1.109 | −0.533 | 0.432
Emotional Value | 1.00 | 7.00 | 4.669 ± 1.301 | −0.430 | −0.081
Satisfaction | 1.00 | 7.00 | 5.070 ± 1.268 | −0.790 | 0.130
Switching Barriers | 1.50 | 7.00 | 5.011 ± 1.054 | −0.826 | 0.823
Usage Intention | 1.00 | 7.00 | 4.827 ± 1.292 | −0.588 | 0.009
Note. SD = Standard Deviation. Source: Authors’ calculations based on the survey data (N = 524).
Table A3. Gender Differences in Variables (N = 524).
Variable | M ± SD (Male, N = 226) | M ± SD (Female, N = 298) | t | p
Functional Value | 4.50 ± 1.29 | 4.65 ± 1.26 | −1.293 | 0.197
Cognitive Value | 4.63 ± 1.09 | 4.90 ± 1.11 | −2.798 | 0.005
Emotional Value | 4.52 ± 1.41 | 4.78 ± 1.20 | −2.228 | 0.026
Satisfaction | 4.99 ± 1.31 | 5.13 ± 1.23 | −1.312 | 0.190
Switching Barriers | 5.05 ± 1.08 | 4.98 ± 1.04 | 0.711 | 0.477
Usage Intention | 4.67 ± 1.35 | 4.95 ± 1.23 | −2.500 | 0.013
Note. SD = Standard Deviation. Source: Authors’ calculations based on the survey data (N = 524).
Table A4. Age Differences in Variables (N = 524).
Variable | M ± SD (20–29 Years, N = 126) | M ± SD (30–39 Years, N = 182) | M ± SD (40–49 Years, N = 103) | M ± SD (50–59 Years, N = 99) | M ± SD (≥60 Years, N = 14) | F | p
Functional Value | 4.42 ± 1.23 | 4.69 ± 1.26 | 4.65 ± 1.26 | 4.50 ± 1.33 | 4.79 ± 1.48 | 1.088 | 0.362
Cognitive Value | 4.59 ± 1.06 | 4.82 ± 1.07 | 4.99 ± 1.00 | 4.71 ± 1.29 | 5.21 ± 1.12 | 2.617 | 0.034
Emotional Value | 4.61 ± 1.29 | 4.70 ± 1.30 | 4.78 ± 1.29 | 4.54 ± 1.31 | 4.80 ± 1.47 | 0.578 | 0.679
Satisfaction | 4.92 ± 1.25 | 5.07 ± 1.26 | 5.24 ± 1.16 | 5.11 ± 1.35 | 4.91 ± 1.72 | 0.960 | 0.429
Switching Barriers | 4.99 ± 0.84 | 4.91 ± 1.12 | 4.97 ± 1.10 | 5.20 ± 1.12 | 5.50 ± 0.93 | 2.009 | 0.092
Usage Intention | 4.78 ± 1.27 | 4.87 ± 1.34 | 5.06 ± 1.14 | 4.63 ± 1.31 | 4.27 ± 1.47 | 2.204 | 0.067
Note. SD = Standard Deviation. Source: Authors’ calculations based on the survey data (N = 524).
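For readers replicating group comparisons of this kind, the following minimal sketch shows how a one-way ANOVA can be computed; the scores are simulated around the cognitive-value group means and sizes reported in Table A4 and do not reproduce the actual survey data.

```python
# Minimal, simulated illustration of the one-way ANOVA used for age-group
# comparisons (Table A4); group means and sizes mirror the table, data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_specs = [(4.59, 126), (4.82, 182), (4.99, 103), (4.71, 99), (5.21, 14)]
groups = [rng.normal(mean, 1.1, size) for mean, size in group_specs]

f_stat, p_value = stats.f_oneway(*groups)   # omnibus test of equal group means
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```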
Table A5. Differences in Variables by Education Level (N = 524).
Variable | M ± SD (High School or Below, N = 15) | M ± SD (Associate Degree, N = 63) | M ± SD (Bachelor’s Degree, N = 354) | M ± SD (Master’s Degree and Above, N = 92) | F | p
Functional Value | 4.97 ± 1.11 | 4.69 ± 1.28 | 4.56 ± 1.28 | 4.54 ± 1.27 | 0.671 | 0.570
Cognitive Value | 4.48 ± 1.03 | 4.87 ± 1.21 | 4.80 ± 1.05 | 4.72 ± 1.26 | 0.641 | 0.589
Emotional Value | 5.10 ± 1.28 | 4.69 ± 1.27 | 4.69 ± 1.29 | 4.49 ± 1.35 | 1.205 | 0.307
Satisfaction | 5.53 ± 0.65 | 4.98 ± 1.30 | 5.07 ± 1.29 | 5.07 ± 1.23 | 0.782 | 0.504
Switching Barriers | 4.77 ± 1.25 | 5.08 ± 1.15 | 5.00 ± 1.04 | 5.04 ± 1.00 | 0.392 | 0.759
Usage Intention | 5.00 ± 1.25 | 4.84 ± 1.15 | 4.83 ± 1.34 | 4.79 ± 1.20 | 0.115 | 0.951
Note. SD = Standard Deviation. Source: Authors’ calculations based on the survey data (N = 524).

References

  1. Belanche, D.; Casaló Ariño, L.; Flavian, C.; Schepers, J. Service robot implementation: A theoretical framework and research agenda. Serv. Ind. J. 2020, 40, 203–225. [Google Scholar] [CrossRef]
  2. Mehrabian, A.; Russell, J.A. An Approach to Environmental Psychology; The MIT Press: Cambridge, MA, USA, 1974. [Google Scholar]
  3. Jones, M.A.; Mothersbaugh, D.L.; Beatty, S.E. Why customers stay: Measuring the underlying dimensions of services switching costs and managing their differential strategic outcomes. J. Bus. Res. 2002, 55, 441–450. [Google Scholar] [CrossRef]
  4. Zhu, H.; Vigren, O.; Söderberg, I.-L. Implementing artificial intelligence empowered financial advisory services: A literature review and critical research agenda. J. Bus. Res. 2024, 174, 114494. [Google Scholar] [CrossRef]
  5. Khanna, P.; Jha, S. AI-Based Digital Product ‘Robo-Advisory’ for Financial Investors. Vis. J. Bus. Perspect. 2024. [Google Scholar] [CrossRef]
  6. Zaman, T.; Ahmed, S.; Lee, J. Engaging roboadvisors in financial advisory services: The role of perceived value for user adoption. Inf. Dev. 2023, 39, 145–161. [Google Scholar] [CrossRef]
  7. Sánchez, J.; Callarisa, L.; Rodríguez, R.M.; Moliner, M.A. Perceived value of the purchase of a tourism product. Tour. Manag. 2006, 27, 394–409. [Google Scholar] [CrossRef]
  8. Sheth, J.N.; Newman, B.I.; Gross, B.L. Why we buy what we buy: A theory of consumption values. J. Bus. Res. 1991, 22, 159–170. [Google Scholar] [CrossRef]
  9. Sweeney, J.C.; Soutar, G.N. Consumer perceived value: The development of a multiple item scale. J. Retail. 2001, 77, 203–220. [Google Scholar] [CrossRef]
  10. Feine, J.; Gnewuch, U.; Morana, S.; Maedche, A. A Taxonomy of Social Cues for Conversational Agents. Int. J. Hum.-Comput. Stud. 2019, 132, 138–161. [Google Scholar] [CrossRef]
  11. Gnewuch, U.; Morana, S.; Maedche, A. Towards Designing Cooperative and Social Conversational Agents for Customer Service. In Proceedings of the International Conference on Interaction Sciences 2017, New York, NY, USA, 27–30 June 2017. [Google Scholar]
  12. Byambaa, O.; Yondon, C.; Rentsen, E.; Darkhijav, B.; Rahman, M. An empirical examination of the adoption of artificial intelligence in banking services: The case of Mongolia. Future Bus. J. 2025, 11, 76. [Google Scholar] [CrossRef]
  13. Dimitriadis, K.A.; Koursaros, D.; Savva, C.S. Exploring the dynamic nexus of traditional and digital assets in inflationary times: The role of safe havens, tech stocks, and cryptocurrencies. Econ. Model. 2025, 151, 107195. [Google Scholar] [CrossRef]
  14. Zeithaml, V. Consumer Perceptions of Price, Quality and Value: A Means-End Model and Synthesis of Evidence. J. Mark. 1988, 52, 2–22. [Google Scholar] [CrossRef]
  15. Monroe, K.B. Pricing: Making Profitable Decisions, 3rd ed.; McGraw-Hill: New York, NY, USA, 2003. [Google Scholar]
  16. Woodruff, R.B. Customer value: The next source for competitive advantage. J. Acad. Mark. Sci. 1997, 25, 139–153. [Google Scholar] [CrossRef]
  17. Kim, H.; Park, S. Machines think, but do we? Embracing AI in flat organizations. Inf. Dev. 2024, 40, 25–42. [Google Scholar] [CrossRef]
  18. Lee, J.; Xiong, Y. Investigating the factors influencing the adoption and use of artificial agents in higher education. Inf. Dev. 2024, 40, 87–102. [Google Scholar] [CrossRef]
  19. Costa, R.; Cruz, M.; Gonçalves, R.; Dias, Á.; da Silva, R.V.; Pereira, L. Artificial intelligence and its adoption in financial services. Int. J. Serv. Oper. Inf. 2022, 12, 70. [Google Scholar] [CrossRef]
  20. Ali, O.; Murray, P.A.; Al-Ahmad, A.; Jeon, I.; Dwivedi, Y.K. Comprehending the theoretical knowledge and practice around AI-enabled innovations in the finance sector. J. Innov. Knowl. 2025, 10, 100762. [Google Scholar] [CrossRef]
  21. Kanaparthi, V. AI-based Personalization and Trust in Digital Finance. arXiv 2024, arXiv:2401.15700. [Google Scholar] [CrossRef]
  22. Oliver, R.L. A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions. J. Mark. Res. 1980, 17, 460–469. [Google Scholar] [CrossRef] [PubMed]
  23. Cronin, J.J.; Brady, M.K.; Hult, G.T.M. Assessing the effects of quality, value, and customer satisfaction on consumer behavioral intentions in service environments. J. Retail. 2000, 76, 193–218. [Google Scholar] [CrossRef]
  24. Hu, H.-H.; Kandampully, J.; Juwaheer, T.D. Relationships and impacts of service quality, perceived value, customer satisfaction, and image: An empirical study. Serv. Ind. J. 2009, 29, 111–125. [Google Scholar] [CrossRef]
  25. Lee, C.-K.; Yoon, Y.-S.; Lee, S.-K. Investigating the relationships among perceived value, satisfaction, and recommendations: The case of the Korean DMZ. Tour. Manag. 2007, 28, 204–214. [Google Scholar] [CrossRef]
  26. MacKenzie, S.; Olshavsky, R. A Reexamination of the Determinants of Consumer Satisfaction. J. Mark. 1996, 60, 15–32. [Google Scholar] [CrossRef]
  27. Jacoby, J. Stimulus-Organism-Response Reconsidered: An Evolutionary Step in Modeling (Consumer) Behavior. J. Consum. Psychol. 2002, 12, 51–57. [Google Scholar] [CrossRef]
  28. Ajzen, I. From Intentions to Actions: A Theory of Planned Behavior. In Action Control: From Cognition to Behavior; Kuhl, J., Beckmann, J., Eds.; Springer: Berlin/Heidelberg, Germany, 1985; pp. 11–39. [Google Scholar]
  29. Venkatesh, V.; Thong, J.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  30. Sokolova, K.; Perez, C. You follow fitness influencers on YouTube. But do you actually exercise? How parasocial relationships, and watching fitness influencers, relate to intentions to exercise. J. Retail. Consum. Serv. 2021, 58, 102276. [Google Scholar] [CrossRef]
  31. Chen, C.-C.; Lin, Y.-C. What drives live-stream usage intention? The perspectives of flow, entertainment, social interaction, and endorsement. Telemat. Inform. 2018, 35, 293–303. [Google Scholar] [CrossRef]
  32. Wang, Z.; Yuan, R.; Li, B.; Kumar, V.; Kumar, A. An Empirical Study of AI Financial Advisor Adoption Through Technology Vulnerabilities in the Financial Context. J. Prod. Innov. Manag. 2025, 10, 12795. [Google Scholar] [CrossRef]
  33. Salisbury, W.; Pearson, R.; Pearson, A.; Miller, D. Perceived security and World Wide Web purchase intention. Ind. Manag. Data Syst. 2001, 101, 165–177. [Google Scholar] [CrossRef]
  34. Cardozo, R.N. An Experimental Study of Customer Effort, Expectation, and Satisfaction. J. Mark. Res. 1965, 2, 244–249. [Google Scholar] [CrossRef]
  35. Zeithaml, V.A.; Berry, L.L.; Parasuraman, A. The behavioral consequences of service quality. J. Mark. 1996, 60, 31–46. [Google Scholar] [CrossRef]
  36. Kim, D.; Ferrin, D.; Rao, R. Trust and Satisfaction, Two Stepping Stones for Successful E-Commerce Relationships: A Longitudinal Exploration. Inf. Syst. Res. 2009, 20, 237–257. [Google Scholar] [CrossRef]
  37. Chen, S.-C.; Liu, M.-L.; Lin, C.-P. Integrating Technology Readiness into the Expectation-Confirmation Model: An Empirical Study of Mobile Services. Cyberpsychol. Behav. Soc. Netw. 2013, 16, 604–612. [Google Scholar] [CrossRef]
  38. Barjaktarovic Rakocevic, S.; Rakic, N.; Rakocevic, R. An Interplay Between Digital Banking Services, Perceived Risks, Customers’ Expectations, and Customers’ Satisfaction. Risks 2025, 13, 39. [Google Scholar] [CrossRef]
  39. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  40. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave new world: Service robots in the frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef]
  41. Cao, J.; Meng, T. Ethics and governance of artificial intelligence in digital China: Evidence from online survey and social media data. Chin. J. Sociol. 2025, 11, 58–89. [Google Scholar] [CrossRef]
  42. Huynh, M.-T.; Aichner, T. In generative artificial intelligence we trust: Unpacking determinants and outcomes for cognitive trust. AI Soc. 2025. [Google Scholar] [CrossRef]
  43. Anderson, E.W.; Fornell, C.; Lehmann, D.R. Customer Satisfaction, Market Share, and Profitability: Findings from Sweden. J. Mark. 1994, 58, 53–66. [Google Scholar] [CrossRef]
  44. Yum, K.; Kim, J. The Influence of Perceived Value, Customer Satisfaction, and Trust on Loyalty in Entertainment Platforms. Appl. Sci. 2024, 14, 5763. [Google Scholar] [CrossRef]
  45. Fornell, C. A National Customer Satisfaction Barometer: The Swedish Experience. J. Mark. 1992, 56, 6–21. [Google Scholar] [CrossRef]
  46. Jones, M.A.; Mothersbaugh, D.L.; Beatty, S.E. Switching barriers and repurchase intentions in services. J. Retail. 2000, 76, 259–274. [Google Scholar] [CrossRef]
  47. Ouyang, L.; Mak, J. Examining how switching barriers moderate the link between customer satisfaction and repurchase intention in the health and fitness club sector. Asia Pac. J. Mark. Logist. 2025. [Google Scholar] [CrossRef]
  48. Lee, M.; Lee, J.; Feick, L. The impact of switching barriers on the customer satisfaction–loyalty link: Mobile phone service in France. J. Serv. Mark. 2001, 15, 35–48. [Google Scholar] [CrossRef]
  49. Verplanken, B.; Aarts, H. Habit, Attitude, and Planned Behaviour: Is Habit an Empty Construct or an Interesting Case of Automaticity? Eur. Rev. Soc. Psychol. 2011, 10, 101–134. [Google Scholar] [CrossRef]
  50. Evanschitzky, H.; Stan, V.; Nagengast, L. Strengthening the satisfaction loyalty link: The role of relational switching costs. Mark. Lett. 2022, 33, 293–310. [Google Scholar] [CrossRef]
  51. Chuang, Y.-F. Pull-and-suck effects in Taiwan mobile phone subscribers switching intentions. Telecommun. Policy 2011, 35, 128–140. [Google Scholar] [CrossRef]
  52. Kim, J.; Lennon, S. Effects of reputation and website quality on online consumers’ emotion, perceived risk and purchase intention: Based on the stimulus-organism-response model. J. Res. Interact. Mark. 2013, 7, 33–56. [Google Scholar] [CrossRef]
  53. Albarq, A. Effect of Web atmospherics and satisfaction on purchase behavior: Stimulus–organism–response model. Future Bus. J. 2021, 7, 62. [Google Scholar] [CrossRef]
  54. iiMedia Research. White Paper on the Development of China’s Virtual Digital Human Industry in 2024; iiMedia Research Group: Guangzhou, China, 2024. [Google Scholar]
  55. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 8th ed.; Cengage Learning: Andover, MA, USA, 2021. [Google Scholar]
  56. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  57. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  58. Bhattacherjee, A. Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  59. Chen, Z.; Dubinsky, A.J. A conceptual model of perceived customer value in e-commerce: A preliminary investigation. Psychol. Mark. 2003, 20, 323–347. [Google Scholar] [CrossRef]
  60. Lin, C.-P.; Bhattacherjee, A. Extending technology usage models to interactive hedonic technologies: A theoretical model and empirical test. Inf. Syst. J. 2010, 20, 163–181. [Google Scholar] [CrossRef]
  61. Whang, J.B.; Song, J.H.; Choi, B.; Lee, J.-H. The effect of Augmented Reality on purchase intention of beauty products: The roles of consumers’ control. J. Bus. Res. 2021, 133, 275–284. [Google Scholar] [CrossRef]
  62. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  63. Ho, C.-C.; MacDorman, K.F. Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Comput. Hum. Behav. 2010, 26, 1508–1518. [Google Scholar] [CrossRef]
  64. Parasuraman, A.P.; Zeithaml, V.; Malhotra, A. E-S-Qual: A Multiple-Item Scale for Assessing Electronic Service Quality. J. Serv. Res. 2005, 7, 213–233. [Google Scholar] [CrossRef]
  65. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. Extrinsic and intrinsic motivation to use computers in the workplace. J. Appl. Soc. Psychol. 1992, 22, 1111–1132. [Google Scholar] [CrossRef]
  66. Xu, J.; Benbasat, I.; Cenfetelli, R.T. Integrating Service Quality with System and Information Quality: An Empirical Test in the E-Service Context. MIS Q. 2013, 37, 777–794. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework of user experience value and switching barriers in AI Digital Human Advisor services. The dotted box represents the three dimensions of perceived value: functional, cognitive, and emotional.
Figure 2. Simplified SEM path model for AI Digital Human Advisors (standardized coefficients). Solid lines represent significant paths, and dotted lines represent non-significant paths. Note: *** p < 0.001; n.s. = not significant. Source: Authors’ calculations based on the study data (N = 524).
Figure 3. Moderating Effect of Switching Barriers on the Relationship Between Satisfaction and Usage Intention. Source: Authors’ calculations based on the regression results from the survey data (N = 524).
Table 1. Frequency and Percentage Distribution of Demographic Information (N = 524).
Name | Options | Frequency | Percentage (%)
Gender | Male | 226 | 43.13
 | Female | 298 | 56.87
Age | 20–29 years | 126 | 24.05
 | 30–39 years | 182 | 34.73
 | 40–49 years | 103 | 19.66
 | 50–59 years | 99 | 18.89
 | 60 years and above | 14 | 2.67
Highest Education | High School or below | 15 | 2.86
 | Associate Degree | 63 | 12.02
 | Bachelor’s Degree | 354 | 67.56
 | Master’s Degree and above | 92 | 17.56
Monthly Income | 1500 yuan and below | 72 | 13.74
 | 1501 to 3000 yuan | 169 | 32.25
 | 3001–5000 yuan | 193 | 36.83
 | 5001 to 10,000 yuan | 79 | 15.08
 | Over 10,000 yuan | 11 | 2.10
Frequency of Using Financial Apps Monthly | Almost Never | 40 | 7.63
 | 5–10 times | 136 | 25.95
 | 11–15 times | 77 | 14.69
 | 16–20 times | 61 | 11.64
 | 21–25 times | 41 | 7.82
 | 26–30 times | 76 | 14.50
 | Daily use | 93 | 17.75
Occupation | Corporate Employee | 339 | 64.69
 | Individual/Independent | 136 | 25.95
 | Student | 44 | 8.40
 | Other | 5 | 0.95
City of Residence | First-tier City | 102 | 19.47
 | New-tier City | 149 | 28.44
 | Second-tier City | 176 | 33.59
 | Other Cities | 97 | 18.51
Total | | 524 | 100.00
Note. N = 524. Frequency and percentage values are based on valid survey responses. Source: Authors’ calculations based on the demographic survey data.
Table 2. Results of Convergent Validity Test.
Dimension | Observed Variable | Factor Loading | AVE | CR
Functional Value | FV1 | 0.735
 | FV2 | 0.684
 | FV3 | 0.819
 | FV4 | 0.789 | 0.58 | 0.84
Cognitive Value | CV1 | 0.633
 | CV2 | 0.812
 | CV3 | 0.714
 | CV4 | 0.809 | 0.56 | 0.83
Emotional Value | EV1 | 0.768
 | EV2 | 0.779
 | EV3 | 0.788
 | EV4 | 0.799 | 0.61 | 0.86
Satisfaction | DS1 | 0.849
 | DS2 | 0.803
 | DS3 | 0.785
 | DS4 | 0.770 | 0.64 | 0.88
Switching Barriers | LB1 | 0.762
 | LB2 | 0.844
 | LB3 | 0.817
 | LB4 | 0.832
 | LB5 | 0.817
 | LB6 | 0.760
 | LB7 | 0.774
 | LB8 | 0.780 | 0.60 | 0.85
Usage Intention | WW1 | 0.749
 | WW2 | 0.755
 | WW3 | 0.713
 | WW4 | 0.744
 | WW5 | 0.704 | 0.54 | 0.85
Note. CR = Composite Reliability; AVE = Average Variance Extracted. Source: Authors’ calculations based on the survey data (N = 524).
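As a worked illustration of how AVE and CR values of this kind are conventionally obtained from standardized loadings, the short sketch below applies the Fornell–Larcker formulas to the emotional-value loadings reported above; the helper function is illustrative and is not taken from the authors' analysis scripts.

```python
# Minimal illustration: AVE and composite reliability (CR) from standardized
# factor loadings, following Fornell & Larcker (1981).
def ave_and_cr(loadings):
    squared = [lam ** 2 for lam in loadings]
    ave = sum(squared) / len(loadings)                    # mean squared loading
    error = [1 - s for s in squared]                      # item error variances
    cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(error))
    return ave, cr

ev_loadings = [0.768, 0.779, 0.788, 0.799]   # Emotional Value loadings from Table 2
ave, cr = ave_and_cr(ev_loadings)
print(f"AVE = {ave:.2f}, CR = {cr:.2f}")     # about 0.61 and 0.86, matching Table 2
```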
Table 3. Standardized Results of Bootstrap Test for Mediation Effect.
Path Relationship | Effect Type | Standardized Effect Value | 95% CI | Effect Proportion
Cognitive Value → Satisfaction → Usage Intention | Direct Effect | 0.132 | [−0.028, 0.292] | -
 | Indirect Effect | 0.073 | [0.010, 0.136] | 35.6%
 | Total Effect | 0.205 | [0.043, 0.367] | -
Emotional Value → Satisfaction → Usage Intention | Direct Effect | 0.345 | [0.187, 0.503] | 81.4%
 | Indirect Effect | 0.079 | [0.018, 0.140] | 18.6%
 | Total Effect | 0.424 | [0.268, 0.580] | -
Functional Value → Satisfaction → Usage Intention | Direct Effect | 0.025 | [−0.108, 0.158] | -
 | Indirect Effect | 0.010 | [−0.014, 0.034] | -
 | Total Effect | 0.035 | [−0.100, 0.170] | -
Note. CI = Confidence Interval. Indirect effects were derived from a bootstrap procedure with 5000 resamples. Source: Authors’ calculations based on the survey data (N = 524).
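To clarify the percentile-bootstrap procedure referenced in the note above, the following sketch resamples a simulated dataset 5,000 times and forms a confidence interval for an indirect (a × b) effect; the data, variable names, and coefficients are hypothetical and only mimic the mediation structure tested here.

```python
# Minimal, simulated illustration of a percentile-bootstrap test of an indirect
# effect (value -> satisfaction -> usage intention) with 5000 resamples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 524  # matches the study's sample size; the data below are synthetic
value = rng.normal(4.8, 1.1, n)                          # e.g., cognitive value
satisfaction = 0.4 * value + rng.normal(0, 1.0, n)
intention = 0.2 * satisfaction + 0.1 * value + rng.normal(0, 1.0, n)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # m -> y given x
    return a * b

idx = np.arange(n)
boot = [indirect_effect(value[s], satisfaction[s], intention[s])
        for s in (rng.choice(idx, size=n, replace=True) for _ in range(5000))]

lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(value, satisfaction, intention)
print(f"indirect effect = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# A confidence interval excluding zero indicates a significant indirect effect.
```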
Table 4. Moderating Effect Analysis of Switching Barriers.
Model | Variable | β | p
Model 1: Satisfaction (M) | Functional Value (X1) | 0.051 | 0.352
 | Cognitive Value (X2) | 0.300 *** | 0.000
 | Emotional Value (X3) | 0.419 *** | 0.000
 | R² | 0.408 |
Model 2: Usage Intention (Y) | Functional Value (X1) | 0.023 | 0.715
 | Cognitive Value (X2) | 0.100 | 0.097
 | Emotional Value (X3) | 0.338 *** | 0.000
 | Satisfaction (M) | 0.184 ** | 0.005
 | Switching Barriers (W) | −0.152 * | 0.045
 | M × W (Interaction Term) | −0.307 *** | 0.000
 | R² | 0.367 |
Note. * p < 0.05; ** p < 0.01; *** p < 0.001. Source: Authors’ calculations based on the survey data (N = 524).
Table 5. Moderated Mediation Effect Parameters for Each Path.
Mediated Path | Effect | Boot SE | Boot CI Lower Bound | Boot CI Upper Bound | p
Low Switching Barriers Level
Cognitive Value → Satisfaction → Usage Intention | 0.224 | 0.05 | 0.126 | 0.322 | 0.001
Emotional Value → Satisfaction → Usage Intention | 0.242 | 0.061 | 0.122 | 0.362 | 0.000
Functional Value → Satisfaction → Usage Intention | 0.031 | 0.034 | −0.036 | 0.098 | 0.363
High Switching Barriers Level
Cognitive Value → Satisfaction → Usage Intention | −0.030 | 0.036 | −0.100 | 0.040 | 0.415
Emotional Value → Satisfaction → Usage Intention | −0.033 | 0.039 | −0.109 | 0.043 | 0.411
Functional Value → Satisfaction → Usage Intention | −0.004 | 0.021 | −0.045 | 0.037 | 0.615
Note. Boot SE = Bootstrapped Standard Error; CI = Confidence Interval. Source: Authors’ calculations based on the survey data (N = 524).
Table 6. Summary of Hypothesis Testing Results for All Variables.
No. | Research Hypothesis | Test Results
H1 | The functional value of AI digital human advisors positively influences user satisfaction. | Not Supported
H2 | The cognitive value of AI digital human advisors positively influences user satisfaction. | Supported
H3 | The emotional value of AI digital human advisors positively influences user satisfaction. | Supported
H4 | The functional value of AI digital human advisors directly and positively influences users’ usage intention. | Not Supported
H5 | The cognitive value of AI digital human advisors directly and positively influences users’ usage intention. | Not Supported
H6 | The emotional value of AI digital human advisors directly and positively influences users’ usage intention. | Supported
H7 | User satisfaction with AI Digital Human Advisors positively influences their usage intention. | Supported
H8 | Satisfaction mediates the relationship between functional value and users’ usage intention. | Not Supported
H9 | Satisfaction mediates the relationship between cognitive value and users’ usage intention. | Supported
H10 | Satisfaction mediates the relationship between emotional value and users’ usage intention. | Supported
H11 | Switching barriers negatively moderate the relationship between user satisfaction and usage intention. | Supported
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
