Article

Psychological Ownership as a Boundary Condition in AI Recommendation: Credibility Formation and Behavioral Intentions in Electronic Commerce

Business School, Hankuk University of Foreign Studies, Dongdaemun-gu, Seoul 02450, Republic of Korea
J. Theor. Appl. Electron. Commer. Res. 2026, 21(2), 63; https://doi.org/10.3390/jtaer21020063
Submission received: 14 January 2026 / Revised: 11 February 2026 / Accepted: 11 February 2026 / Published: 12 February 2026
(This article belongs to the Section Digital Marketing and the Evolving Consumer Experience)

Abstract

AI recommendation agents increasingly mediate consumer decision-making in electronic commerce, yet algorithm-based agents often suffer credibility deficits relative to human sources. This research examines how recommendation agent type (human vs. AI) influences behavioral intentions through perceived credibility and how psychological ownership moderates this process. Across two controlled online experiments, consumers evaluated recommendations delivered by human or AI agents. Study 1 shows that baseline AI agents are perceived as less credible than human agents, while AI agents receiving minimal user involvement (naming) exhibit partially improved credibility, and that credibility mediates the effects of agent type on intention to use the system, recommendation acceptance, and purchase intention. Study 2 introduces a stronger psychological ownership manipulation through AI agent customization. Results indicate that customization strengthens psychological ownership, which reduces the credibility gap between AI and human agents and, when ownership is high, even allows AI agents to be evaluated as more credible. Conditional process analyses confirm that psychological ownership moderates both the effect of agent type on credibility and the indirect effects on behavioral intentions. Overall, the findings demonstrate that credibility toward AI recommendation agents is dynamically shaped by user–agent relational experiences. By integrating algorithm aversion, source credibility, and psychological ownership perspectives, this research advances understanding of consumer–AI interaction and provides design insights for AI-enabled recommendation systems in electronic commerce.

1. Introduction

Artificial intelligence (AI) has become a core infrastructural technology in electronic commerce, reshaping how consumers search for information, evaluate alternatives, and make purchase decisions. In particular, AI-based recommendation agents now play a pivotal role in e-commerce platforms by algorithmically curating product options tailored to individual users’ preferences and behavioral histories [1,2]. Prior research has consistently shown that recommendation systems can reduce cognitive effort, alleviate information overload, and enhance decision quality, thereby improving user satisfaction and purchase likelihood [3,4]. As personalization technologies continue to advance, recommendation agents have evolved from passive tools into interactive agents that actively shape consumer experiences within digital marketplaces [5].
Despite these advances, AI recommendation agents continue to face persistent evaluative disadvantages relative to human sources. A growing body of research documents systematic biases in how individuals evaluate algorithmic versus human advice, often favoring human judgment even when algorithmic recommendations are objectively superior. This phenomenon, commonly referred to as algorithm aversion, has been attributed to concerns about rigidity, lack of contextual sensitivity, or diminished social understanding [6,7]. Importantly, these credibility deficits persist even as algorithmic accuracy and explainability improve, suggesting that technical enhancements alone are insufficient to resolve consumers’ psychological resistance to AI agents. Related work further demonstrates that people are more willing to rely on algorithms when they perceive them as collaborative rather than authoritative, or when they can exercise some degree of agency over algorithmic outputs [8,9]. Together, these findings point to a tension between the efficiency gains offered by AI recommendation systems and consumers’ reluctance to fully accept algorithmic sources.
Trust and perceived credibility are central mechanisms underlying these divergent responses to AI recommendations. In electronic commerce contexts, source credibility has long been recognized as a key determinant of persuasion, shaping attitudes toward information sources and influencing subsequent behavioral intentions [10,11]. Within recommendation systems, users’ evaluations of an agent’s trustworthiness and expertise guide whether recommendations are accepted, ignored, or resisted [12,13,14,15]. Although trust and credibility are closely related, credibility represents an evaluative judgment about a source’s perceived competence and reliability, whereas trust reflects a broader willingness to accept vulnerability. In AI-mediated recommendation contexts, credibility is often the more immediate evaluative mechanism through which agent characteristics influence behavior. Recent studies further suggest that credibility perceptions often mediate the relationship between system characteristics and downstream behavioral outcomes in AI-driven environments [16,17].
However, existing research has paid comparatively limited attention to how user experience design mechanisms shape credibility formation toward AI recommendation agents. Prior studies have primarily focused on transparency, explainability, and accuracy as determinants of trust in algorithms [18,19]. What remains underexplored is how ownership-based design cues, which foster a psychological relationship between users and AI agents, differ fundamentally from explainability-based cues that emphasize system logic rather than self-related connection. In particular, the concept of psychological ownership, defined as the feeling that an object or entity is “mine,” has been shown to affect attitudes, responsibility, and valuation in both physical and digital domains [20,21]. Emerging evidence suggests that customization and user-initiated input can foster psychological ownership toward digital objects and systems, with meaningful consequences for user engagement and judgment [22,23,24]. Yet, how psychological ownership functions as a boundary condition shaping credibility toward AI recommendation agents remains insufficiently theorized.
Against this backdrop, the present research makes three contributions. First, it identifies perceived credibility as a central mediating mechanism through which recommendation agent type influences behavioral intentions. Second, it reconceptualizes psychological ownership as a boundary condition that alters credibility formation toward AI agents, rather than as a simple antecedent of trust or acceptance. Third, by integrating algorithm aversion, source credibility, and psychological ownership perspectives, this research demonstrates that AI credibility is relational and design-contingent rather than technologically fixed. Across two scenario-based experiments, the study tests a moderated mediation model in which credibility mediates the effects of agent type on recommendation acceptance, system use intention, and purchase intention, and in which psychological ownership conditions the strength of this mediation.

2. Theoretical Background and Hypotheses

2.1. Human vs. AI Recommendation Agents and Credibility

Recommendation agents can function as social and informational sources that influence consumer judgments and decisions in electronic commerce. Prior research distinguishes between human and algorithmic sources, suggesting that consumers apply different evaluative processes depending on the perceived nature of the agent providing advice [8,9]. Human recommendation agents are often perceived as more capable of understanding contextual nuances, empathizing with consumer needs, and exercising judgment grounded in social experience, which can enhance their perceived credibility [9]. In contrast, AI recommendation agents are frequently evaluated through a more mechanical lens, with consumers focusing on their computational logic rather than social competence.
A growing literature on human–algorithm interaction further shows that even when algorithmic recommendations are objectively accurate, consumers may discount them relative to human advice, particularly in domains perceived as subjective or preference-based [6,7]. This tendency reflects broader concerns that AI systems lack flexibility, contextual sensitivity, or humanlike understanding, all of which can undermine credibility perceptions [25,26]. In recommendation contexts, these perceptions suggest that AI agents may face an inherent credibility disadvantage compared to human agents.
Taken together, prior research on algorithm aversion, source credibility, and trust provides important but incomplete explanations for consumers’ differential responses to human versus AI recommendation agents. Algorithm aversion research documents systematic resistance to algorithmic advice, particularly in subjective or preference-based domains, yet offers limited insight into how such resistance may change across different user experiences. Credibility and trust research identifies evaluative mechanisms that shape acceptance of information sources, but largely treats AI agents as impersonal systems rather than targets of relational evaluation. As a result, existing frameworks struggle to explain why similar AI systems are sometimes resisted and sometimes accepted despite comparable performance characteristics [27,28].
These inconsistencies suggest that differential responses to AI recommendation agents cannot be fully understood by focusing on algorithmic performance or accuracy alone. Instead, they point to the importance of how users cognitively and relationally construe the recommendation source itself. Without accounting for such relational evaluations, prior frameworks remain limited in explaining when AI agents are discounted versus accepted. Consistent with this reasoning, recommendation agents identified as human are expected to be perceived as more credible than AI-based agents. Thus, the following hypothesis is proposed:
H1. 
Recommendation agents presented as human will be perceived as having higher credibility than recommendation agents presented as AI.

2.2. Credibility and Behavioral Intentions in Recommendation Contexts

Credibility has long been recognized as a central determinant of persuasion and behavioral influence. In marketing and communication research, source credibility shapes how information is processed, evaluated, and acted upon, influencing attitudes and intentions across a wide range of contexts [10,11]. In electronic commerce, credibility becomes particularly salient because consumers often rely on mediated information sources rather than direct experience when making decisions [29]. Although credibility is often discussed alongside trust, the two constructs are conceptually distinct. Credibility refers to evaluative judgments about a source’s perceived expertise and trustworthiness, whereas trust reflects a broader willingness to accept vulnerability based on those evaluations. In AI-mediated recommendation contexts, credibility functions as a more immediate assessment mechanism that precedes and informs trust-related responses [11,30,31].
Importantly, credibility is a multidimensional construct, and not all of its dimensions are equally salient in evaluations of AI recommendation agents. Prior research suggests that while algorithmic systems are often assumed to possess technical expertise or computational competence, they are more likely to be perceived as lacking trustworthiness, particularly in terms of benevolence and alignment with user interests. In contrast to human agents, AI recommendation agents are less readily associated with relational intent or concern for users’ welfare, which can undermine credibility evaluations even when expertise is not in question. This asymmetry implies that credibility deficits associated with AI agents are driven less by doubts about expertise and more by skepticism regarding trustworthiness, especially in subjective or preference-based decision contexts [11,28,31].
Within recommendation system research, perceived credibility has been shown to predict multiple downstream behavioral outcomes, including recommendation acceptance, continued system use, and purchase intention [13,14,15]. When users perceive a recommendation agent as credible, they are more likely to trust its suggestions, integrate them into their decision-making processes, and rely on the recommendation source in future interactions [32,33]. These effects are observed not only for human advisors but also for AI-driven agents, underscoring credibility’s central role in shaping user behavior.
Accordingly, in the context of recommendation agents, higher perceived credibility should lead to stronger behavioral intentions across multiple outcomes. Thus, the following hypothesis is proposed:
H2. 
Perceived credibility of a recommendation agent will have a positive effect on (a) intention to use the recommendation system, (b) recommendation acceptance, and (c) purchase intention.

2.3. Credibility as a Mediating Mechanism

Beyond its direct effects, credibility often functions as a key psychological mechanism through which source characteristics influence behavioral outcomes. Research on persuasion and decision support suggests that individuals rarely respond to source attributes directly; instead, these attributes shape evaluative judgments, such as credibility or trust, which then guide behavior [30,34]. This logic is particularly relevant in technology-mediated environments, where users must infer the qualities of an agent based on limited cues.
In human versus AI comparisons, differences in agent type are likely to manifest primarily through credibility-related evaluations rather than direct behavioral effects. Prior studies indicate that algorithmic sources influence behavior indirectly by shaping users’ beliefs about competence, reliability, and benevolence [35,36,37]. As such, credibility provides a theoretically grounded pathway linking agent type to behavioral intentions in recommendation contexts.
Based on this reasoning, the effect of recommendation agent type on behavioral intentions is expected to be transmitted through perceived credibility. Accordingly, the following hypothesis is proposed:
H3. 
The effect of recommendation agent type (human vs. AI) on (a) intention to use the recommendation system, (b) recommendation acceptance, and (c) purchase intention will be mediated by perceived credibility.

2.4. Psychological Ownership in AI Recommendation Agents

Psychological ownership refers to the state in which individuals feel as though a target object or entity is “theirs”, regardless of legal ownership [20,22]. Psychological ownership theory further suggests that when individuals perceive a target as “mine,” the target becomes incorporated into the self-concept, leading individuals to evaluate it through a more self-referential lens [20,38]. This self–object linkage increases feelings of responsibility, control, and personal relevance, which in turn shape evaluative judgments toward the owned target [22].
In digital and technological contexts, psychological ownership toward systems and interfaces can emerge through user-initiated actions such as personalization and customization, which involve exercising control and investing the self. These processes allow individuals to project their identity and preferences onto technological artifacts, thereby fostering a sense of ownership toward digital systems [21,23]. Prior research further indicates that psychological ownership experiences have meaningful consequences for users’ attitudes and behavioral intentions in AI-mediated and customization-based digital contexts, by shaping perceptions and evaluative responses toward AI-generated outputs [21,23,24].
Applying this perspective to AI recommendation agents, psychological ownership may play a critical role in shaping how consumers evaluate AI sources. When users feel a sense of ownership over a digital agent, they are more likely to perceive the agent as aligned with their preferences and intentions, reducing psychological distance and increasing perceived self–agent congruence. Such self-related evaluations tend to bias judgments in a favorable direction, leading individuals to ascribe greater competence, reliability, and benevolence to the target [20,38]. Consistent with this logic, research on algorithm aversion suggests that allowing users even minimal agency over algorithmic outputs can increase acceptance and trust in algorithmic systems [7,8].
While the preceding hypotheses establish credibility as a central mechanism linking recommendation agent type to behavioral intentions, they do not explain when or under what conditions the credibility disadvantage of AI recommendation agents may be attenuated. In particular, existing frameworks provide limited insight into how user–agent relationships shape credibility evaluations beyond impersonal or performance-based assessments. This limitation motivates the consideration of psychological ownership as a relational condition that can systematically alter how AI agents are evaluated [39].
Psychological ownership intervenes in the evaluation process through a combination of self-congruence, perceived control, and affective attachment rather than a single mechanism. First, ownership fosters self–agent congruence by integrating the AI agent into the user’s extended self, leading evaluations to be guided by self-referential judgments. Second, ownership enhances perceived control, as user-initiated customization and involvement increase the sense that the agent reflects one’s own intentions and preferences. Third, these cognitive appraisals are accompanied by affective attachment, which further biases evaluations in a favorable direction. Together, these interrelated mechanisms explain how psychological ownership reshapes credibility formation by transforming AI agents from impersonal systems into personally relevant entities, thereby attenuating credibility deficits associated with non-human sources [20,21,23,38].
Importantly, the effects of psychological ownership on credibility formation may not increase linearly as ownership intensifies. At high levels of psychological ownership, evaluations of AI recommendation agents may be subject to a ceiling effect or an over-attachment bias, whereby additional ownership cues no longer yield incremental credibility gains. In such cases, baseline AI agents may benefit from ownership-driven self-referential evaluations that reduce initial credibility deficits, while further customization provides diminishing marginal returns. Moreover, strong ownership may shift evaluative standards from source-based comparisons to self-extension–based judgments, thereby attenuating or even reversing relative differences between AI and human agents. This perspective anticipates the possibility that credibility-based indirect effects may weaken or change direction under conditions of high psychological ownership, providing a theoretical foundation for the conditional patterns examined in Study 2.
Taken together, these arguments suggest that psychological ownership can mitigate the credibility disadvantage typically associated with AI recommendation agents by reframing the agent as personally relevant and self-related rather than impersonal. Accordingly, psychological ownership is expected to function as a critical boundary condition in consumers’ evaluations of AI recommendation agents. Thus, the following hypothesis is proposed:
H4. 
Psychological ownership will moderate the effect of AI recommendation agents on perceived credibility, such that higher psychological ownership attenuates the negative effect of AI (vs. human) agents on credibility.

2.5. Moderated Mediation of Credibility

If psychological ownership moderates the relationship between AI recommendation agents and perceived credibility, it follows that the indirect effects of agent type on behavioral intentions through credibility should also be conditional on psychological ownership. In other words, the mediating role of credibility is expected to vary depending on the extent to which users experience a sense of ownership over the AI agent.
When psychological ownership is low, AI recommendation agents are likely to be evaluated as relatively impersonal, allowing credibility deficits associated with non-human sources to translate into weaker behavioral intentions. In contrast, as psychological ownership increases, credibility perceptions are shaped through a more self-referential evaluative process, attenuating the negative indirect effects of AI agents on downstream outcomes. Thus, the influence of agent type on behavioral intentions via credibility should differ systematically across levels of psychological ownership [40]. The proposed moderated mediation framework is summarized in Figure 1.
Accordingly, the following hypothesis is proposed:
H5. 
The indirect effect of recommendation agent type (human vs. AI) on (a) intention to use the recommendation system, (b) recommendation acceptance, and (c) purchase intention through perceived credibility will be moderated by psychological ownership, such that the indirect effect becomes weaker as psychological ownership increases.

3. Study 1

3.1. Participants and Procedure

Study 1 employed an online experiment using Prolific to examine the mediating role of credibility in the relationship between recommendation agent type and behavioral intentions. Participants were randomly assigned to one of three experimental conditions involving a simulated online shopping recommendation scenario: a human recommendation agent, an AI recommendation agent, or an AI recommendation agent with a naming task designed to induce psychological ownership. In the naming condition, participants were instructed to assign a name to the AI agent. No naming task was included in the other conditions. After completing the scenario, participants responded to a set of questionnaire items measuring credibility and behavioral intentions.
A total of 806 participants were initially recruited. To mitigate the risk of automated or low-quality responses, a bot-detection procedure based on reCAPTCHA scores was implemented. Following Qualtrics guidelines, responses with a reCAPTCHA score below 0.50, which indicates a higher likelihood of bot activity, were excluded from the sample [41]. This procedure resulted in the removal of five responses, yielding an interim sample of 801 participants.
Additional data quality screening procedures were applied. First, consistent with methodological research suggesting that unusually short response times may indicate insufficient engagement or careless responding, responses with particularly short completion times for the naming task were screened [42,43]. As a conservative screening criterion, participants whose completion times for the naming task fell within the lowest decile of the response time distribution were excluded from the analysis.
Second, responses containing clearly invalid names were removed. These included entries consisting of a single character, arbitrary strings of letters, or generic labels explicitly referring to artificial systems. This screening step was applied conservatively to exclude only responses that suggested minimal engagement with the naming task, thereby ensuring that the psychological ownership manipulation was meaningfully realized and preserving internal validity. After applying all data quality screening procedures, the final sample used for analysis consisted of 710 participants.
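To make the screening logic concrete, the following is a minimal Python/pandas sketch of the three-step pipeline described above (the same criteria are reused in Study 2). All column names (recaptcha_score, naming_task_seconds, condition, agent_name) and the generic-label list are illustrative assumptions, not the study's actual export fields.

```python
import pandas as pd

# Illustrative sketch of the data quality screening pipeline;
# column names and the label list are hypothetical.
df = pd.read_csv("study1_raw.csv")

# 1) Bot screening: drop responses with a reCAPTCHA score below 0.50.
df = df[df["recaptcha_score"] >= 0.50]

# 2) Response-time screening (naming condition only): drop the lowest
#    decile of naming-task completion times.
naming = df["condition"] == "ai_naming"
cutoff = df.loc[naming, "naming_task_seconds"].quantile(0.10)
df = df[~naming | (df["naming_task_seconds"] > cutoff)]

# 3) Invalid-name screening (naming condition only): single characters,
#    arbitrary non-alphabetic strings, or generic machine labels.
GENERIC_LABELS = {"ai", "bot", "robot", "agent", "chatbot"}

def is_valid_name(name) -> bool:
    name = str(name).strip()
    return len(name) > 1 and name.isalpha() and name.lower() not in GENERIC_LABELS

df = df[~naming | df["agent_name"].apply(is_valid_name)]
print(f"Final sample: {len(df)} participants")
```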

3.2. Measures

All items were measured on Likert-type scales (1 = strongly disagree, 7 = strongly agree). Credibility was measured using items capturing the trustworthiness and expertise dimensions of source credibility. To ensure contextual appropriateness for both AI and human agents, items with clear cross-agent applicability were selected. The final scale consisted of six items: “This recommendation agent seems trustworthy,” “This recommendation agent seems reliable,” “This recommendation agent seems dependable,” “This recommendation agent seems knowledgeable,” “This recommendation agent seems like an expert,” and “This recommendation agent seems skilled” [44] (α = 0.97).
Behavioral intention was conceptualized as a higher-order construct encompassing three related outcomes: intention to use the recommendation system, recommendation acceptance, and purchase intention. Intention to use the recommendation system was measured using three items adapted from [45] (α = 0.97), capturing participants’ intention to continue using the recommendation system in future decision-making contexts: “I would be willing to use this recommendation system,” “I would use this recommendation system when I need recommendations,” and “I would be willing to continue using this recommendation system.”
Recommendation acceptance was measured using three items adapted from prior research on intention to follow advice and recommendations [14,46] (α = 0.95): “I would follow the agent’s recommendation,” “I would choose the option recommended by the agent,” and “I would rely on the agent’s recommendation when making my decision.”
Purchase intention was measured using two items adapted from [47] (α = 0.87), capturing participants’ likelihood of purchasing the recommended product: “I would purchase the recommended product” and “I would consider buying the recommended product.”
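The reliabilities reported above are Cronbach’s alpha values. For reference, a generic implementation of the coefficient over a respondents-by-items score matrix is sketched below; this is the standard formula, not code from the study itself, and the column names in the usage comment are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# e.g., for the six credibility items (hypothetical column names):
# alpha = cronbach_alpha(df[["cred1", "cred2", "cred3",
#                            "cred4", "cred5", "cred6"]].to_numpy())
```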

3.3. Results

Study 1 tested H1 to H3 using PROCESS Model 4 [48] with 10,000 bootstrap samples. PROCESS was used because the hypothesized models center on estimating indirect and conditional indirect effects with observed variables, for which regression-based process analysis is well suited and widely adopted in prior research. Income, gender, and education were included as covariates. The human recommendation agent condition served as the reference group. Two dummy variables were specified: X1 comparing the AI control condition with the human condition, and X2 comparing the AI naming condition with the human condition. A manipulation check indicated that participants in the AI naming condition reported higher perceived psychological ownership toward the AI agent than those in the standard AI condition, indicating that the naming task successfully induced a minimal ownership experience.
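For readers without access to the PROCESS macro, the logic of Model 4 can be reproduced with two OLS regressions and a percentile bootstrap of the a×b product, as in the minimal Python sketch below. All variable names are hypothetical, and the second dummy is passed among the covariates so both contrasts enter each model jointly, mirroring the PROCESS specification.

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(data, x, m, y, covs):
    """a*b product for one dummy-coded contrast (PROCESS Model 4 logic)."""
    a = sm.OLS(data[m], sm.add_constant(data[[x] + covs])).fit().params[x]
    b = sm.OLS(data[y], sm.add_constant(data[[x, m] + covs])).fit().params[m]
    return a * b

def bootstrap_ci(data, x, m, y, covs, reps=10_000, seed=1):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    draws = [indirect_effect(data.sample(frac=1, replace=True, random_state=rng),
                             x, m, y, covs)
             for _ in range(reps)]
    return np.percentile(draws, [2.5, 97.5])

# e.g., AI-control-vs-human contrast on intention to use (hypothetical names);
# the X2 dummy rides along as a covariate so both contrasts are estimated jointly:
# lo, hi = bootstrap_ci(df, "x1_ai_control", "credibility", "use_intention",
#                       ["x2_ai_naming", "income", "gender", "education"])
```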

3.3.1. Effects of Agent Type on Credibility (H1)

Consistent with H1, recommendation agent type significantly influenced perceived credibility. Relative to the human agent, credibility was lower in the AI control condition (B = −0.9441, SE = 0.1021, t = −9.2450, p < 0.001, 95% CI [−1.1446, −0.7436]) and in the AI naming condition (B = −0.6868, SE = 0.1154, t = −5.9491, p < 0.001, 95% CI [−0.9134, −0.4601]). Thus, recommendation agents presented as human were perceived as more credible than AI agents, supporting H1.

3.3.2. Effects of Credibility on Behavioral Intentions (H2)

Supporting H2, perceived credibility positively predicted all three behavioral intention outcomes. Credibility was positively associated with (a) intention to use the recommendation system (B = 0.9463, SE = 0.0276, t = 34.2482, p < 0.001, 95% CI [0.8920, 1.0005]), (b) recommendation acceptance (B = 0.7598, SE = 0.0225, t = 33.7290, p < 0.001, 95% CI [0.7155, 0.8040]), and (c) purchase intention (B = 0.7264, SE = 0.0253, t = 28.7280, p < 0.001, 95% CI [0.6767, 0.7760]). Therefore, H2 was supported.

3.3.3. Mediation of Credibility (H3)

H3 predicted that credibility mediates the effect of recommendation agent type on behavioral intentions. Across all three dependent variables, the direct effects of agent type were not significant, whereas indirect effects through credibility were significant.
For intention to use, direct effects of agent type were not significant (X1: B = 0.1254, p = 0.114; X2: B = 0.1011, p = 0.244), while indirect effects through credibility were significant for both AI control versus human (indirect effect = −0.8934, BootSE = 0.1024, 95% CI [−1.0972, −0.6931]) and AI naming versus human (indirect effect = −0.6499, BootSE = 0.1094, 95% CI [−0.8701, −0.4378]).
For recommendation acceptance, direct effects were non-significant (X1: B = 0.0440, p = 0.497; X2: B = −0.0052, p = 0.941), whereas indirect effects through credibility were significant for AI control versus human (indirect effect = −0.7173, BootSE = 0.0774, 95% CI [−0.8689, −0.5634]) and AI naming versus human (indirect effect = −0.5218, BootSE = 0.0868, 95% CI [−0.6895, −0.3518]).
For purchase intention, direct effects were again non-significant (X1: B = 0.1137, p = 0.118; X2: B = 0.0438, p = 0.581), while indirect effects via credibility were significant for AI control versus human (indirect effect = −0.6858, BootSE = 0.0770, 95% CI [−0.8421, −0.5386]) and AI naming versus human (indirect effect = −0.4989, BootSE = 0.0842, 95% CI [−0.6674, −0.3373]).
Together, these findings indicate that recommendation agent type influenced behavioral intentions primarily through perceived credibility, supporting H3.

3.3.4. Comparison Between AI Conditions

Additional analyses compared the AI control condition and the AI naming condition. The naming condition produced higher perceived credibility than the AI control condition (B = 0.2617, SE = 0.1229, t = 2.1284, p = 0.0339, 95% CI [0.0200, 0.5033]). Direct effects on behavioral intentions were not significant, but indirect effects through credibility were significant for (a) intention to use (indirect effect = 0.2543, BootSE = 0.1173, 95% CI [0.0260, 0.4875]), (b) recommendation acceptance (indirect effect = 0.1949, BootSE = 0.0885, 95% CI [0.0215, 0.3669]), and (c) purchase intention (indirect effect = 0.1870, BootSE = 0.0858, 95% CI [0.0169, 0.3524]). This suggests that introducing a naming task enhanced behavioral intentions indirectly by increasing perceived credibility, although direct differences between the two AI conditions were not significant.

3.4. Discussion

Study 1 provides clear evidence that recommendation agents presented as human are perceived as more credible than AI agents, and that this credibility difference systematically translates into downstream behavioral intentions. Across all three outcomes, credibility fully mediated the effect of agent type, indicating that lower willingness to use, accept, or purchase based on AI recommendations arises primarily from reduced credibility perceptions rather than from direct resistance to AI per se. At the same time, the comparison between the two AI conditions shows that introducing a simple user-initiated interaction, namely naming the AI agent, increases perceived credibility and indirectly improves behavioral intentions. These findings suggest that the credibility disadvantage of AI recommendation agents is not fixed, but can be shaped by design features that encourage users to engage with the agent in a more self-related manner. Building on this insight, Study 2 examines whether psychological ownership systematically moderates credibility formation and the resulting indirect effects of agent type on behavioral intentions.

4. Study 2

4.1. Participants and Procedure

Study 2 was designed to examine the moderating role of psychological ownership in the relationship between recommendation agent type, credibility, and behavioral intentions. As in Study 1, participants were recruited via Prolific and took part in an online experiment involving a simulated online shopping recommendation scenario. Participants were randomly assigned to one of three experimental conditions: a human recommendation agent, a standard AI recommendation agent, or a customizable AI recommendation agent designed to induce stronger psychological ownership.
In the customizable AI condition, participants were informed that they could customize the recommendation agent in ways similar to existing AI character-based conversational platforms, where users create customized AI agents for interactive dialogue. Specifically, they were instructed to describe (i) the desired appearance of the AI agent, (ii) the agent’s personality, tone, and conversational style, and (iii) a name for the agent. This customization procedure was designed to induce a stronger form of psychological ownership than the naming task used in Study 1. Whereas assigning a name involves limited user input and self-investment, the customization task required participants to actively define multiple attributes of the AI agent, thereby increasing perceived control and self-investment. This distinction allows the two studies to operationalize different intensities of psychological ownership while maintaining conceptual consistency across designs.
Participants were told that their descriptions would be used to generate a personalized AI recommendation agent. In the other two conditions, no customization task was included. After exposure to the recommendation scenario, participants completed questionnaire measures assessing credibility and behavioral intentions.
Psychological ownership was not measured in the human-agent condition by design. Psychological ownership is conceptually grounded in perceptions of control, self-investment, and self–object linkage toward a target that is subject to user appropriation. While such processes are theoretically meaningful for AI recommendation agents that users can name, customize, or shape, they are conceptually less appropriate for human recommendation agents, for which feelings of ownership would be unnatural and difficult to interpret. Importantly, the absence of psychological ownership measurement in the human condition does not affect the comparability of experimental conditions, as psychological ownership is not modeled as a mediator or antecedent across all agent types, but as a boundary condition that qualifies credibility formation within AI contexts. Accordingly, ownership is theorized and tested only where it is conceptually valid, preserving internal validity while avoiding construct misuse.
A total of 802 participants were initially recruited. To mitigate automated or low-quality responses, a bot-detection procedure based on reCAPTCHA scores was applied. Following Qualtrics guidelines, responses with a reCAPTCHA score below 0.50 were excluded [41], resulting in the removal of three cases and an interim sample of 799 participants.
Additional data quality screening procedures were then conducted following the same criteria used in Study 1. First, completion times for the customization task were screened, and participants whose completion times fell within the lowest decile of the response time distribution were excluded to reduce the likelihood of insufficient task engagement [42,43]. Second, responses containing clearly non-meaningful or invalid names were removed, including single-character entries, arbitrary strings of letters, or generic labels referring to artificial systems. These screening procedures were applied conservatively to ensure meaningful engagement with the customization task while avoiding overly restrictive exclusion criteria. After all screening procedures, the final sample for Study 2 consisted of 766 participants.

4.2. Measures

All items were measured on Likert-type scales (1 = strongly disagree, 7 = strongly agree). To ensure consistency across studies, credibility and behavioral intention measures in Study 2 followed the same items and procedures used in Study 1. Behavioral intention was operationalized using three outcome variables: (a) intention to use the recommendation system, (b) recommendation acceptance, and (c) purchase intention. Psychological ownership toward the recommendation agent was measured only in the AI conditions using three items adapted from [49] (α = 0.99): “I really feel like I own this agent,” “I really feel like this agent belongs to me,” and “I really feel like this agent is mine.” Although the internal consistency of the psychological ownership scale was very high, this level of reliability is consistent with prior research using short, conceptually homogeneous ownership measures. The three items capture closely related facets of perceived ownership and self–object linkage, which are expected to co-vary strongly by design. Given the theoretical focus on ownership strength rather than dimensional differentiation, the scale was retained to ensure conceptual clarity and measurement precision rather than item reduction.

4.3. Results

Overall, Study 2 replicates the credibility-based mediation observed in Study 1 and further demonstrates that psychological ownership systematically moderates both credibility formation and its indirect effects on behavioral intentions. In particular, higher psychological ownership attenuates and ultimately reverses the relative credibility advantage of customized AI agents over baseline AI agents. A manipulation check confirmed that participants in the customization condition reported significantly higher psychological ownership toward the AI agent than those in the standard AI condition, indicating that the ownership manipulation was effective.
Consistent with Study 1, recommendation agent type significantly influenced perceived credibility, and credibility in turn positively predicted all behavioral intention outcomes. Baseline AI agents were evaluated as less credible than human agents, whereas customized AI agents exhibited substantially higher credibility. Credibility remained a strong positive predictor of intention to use the recommendation system, recommendation acceptance, and purchase intention. Importantly, credibility again served as a mediating mechanism linking recommendation agent type to behavioral intentions. These patterns replicate the mediation structure established in Study 1, confirming the robustness of H1–H3.

4.3.1. Moderating Effect of Psychological Ownership on Credibility (H4)

Results show that psychological ownership significantly conditions how AI agent type influences perceived credibility. While customized AI agents are evaluated as more credible than baseline AI agents at low to moderate levels of ownership, this difference diminishes and becomes non-significant at high levels of ownership.
To test H4, PROCESS Model 1 [48] was employed with 10,000 bootstrap samples to examine whether psychological ownership moderated the effect of AI agent type on perceived credibility. Analyses focused on the AI conditions only, comparing the baseline AI agent and customized AI agent. The interaction between agent type and psychological ownership on credibility was significant (interaction term: b = −0.2330, SE = 0.0712, t = −3.27, p = 0.001). The change in explained variance due to the interaction was also significant (ΔR2 = 0.0160, F(1,475) = 10.70, p = 0.001), indicating that psychological ownership systematically altered how AI agents were evaluated in terms of credibility.
Conditional effects showed that when psychological ownership was low, customized AI agents were evaluated as significantly more credible than baseline AI agents (b = 0.6663, SE = 0.2054, p = 0.001). This difference remained significant at moderate levels of psychological ownership (b = 0.4333, SE = 0.1643, p = 0.009). At high levels of psychological ownership, however, the difference reversed in sign and became non-significant (b = −0.4771, SE = 0.2545, p = 0.061), indicating that psychological ownership attenuates the credibility disadvantage of baseline AI agents.
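The moderation logic of Model 1 can be sketched in Python as follows, with the ownership moderator mean-centered before forming the product term and conditional effects probed at the 16th, 50th, and 84th percentiles, one common probing choice in PROCESS. All variable names are hypothetical.

```python
import numpy as np
import statsmodels.formula.api as smf

# AI conditions only: condition coded 0 = baseline AI, 1 = customized AI.
ai = df[df["agent"] != "human"].copy()
ai["own_c"] = ai["ownership"] - ai["ownership"].mean()  # mean-center moderator

fit = smf.ols("credibility ~ condition * own_c", data=ai).fit()
b1 = fit.params["condition"]        # effect of customization at mean ownership
b3 = fit.params["condition:own_c"]  # interaction term

# Conditional effect of customization at low/moderate/high ownership.
for label, w in zip(["low", "moderate", "high"],
                    np.percentile(ai["own_c"], [16, 50, 84])):
    print(f"{label} ownership: effect = {b1 + b3 * w:.4f}")
```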

4.3.2. Moderated Mediation Through Credibility (H5)

Consistent with the proposed moderated mediation framework, the indirect effects of AI agent type on behavioral intentions through credibility vary systematically across levels of psychological ownership. Specifically, the credibility-based advantage of customized AI agents is positive at low and moderate ownership levels but reverses at high ownership levels, indicating a conditional change in the direction of the indirect effect.
Next, PROCESS Model 7 [48] was used with 10,000 bootstrap samples to examine whether the indirect effect of recommendation agent type on behavioral intentions through credibility varied across levels of psychological ownership.
For intention to use the recommendation system, the indirect effect of AI agent type through credibility was positive and significant when psychological ownership was low (effect = 0.6246, BootSE = 0.2314, BootCI [0.1641, 1.0746]) and moderate (effect = 0.4061, BootSE = 0.1797, BootCI [0.0530, 0.7576]). At high psychological ownership, however, the indirect effect became negative (effect = −0.4472, BootSE = 0.1662, BootCI [−0.7890, −0.1305]). The index of moderated mediation was significant (index = −0.2184, BootSE = 0.0640, BootCI [−0.3464, −0.0937]).
A similar pattern emerged for recommendation acceptance. The conditional indirect effect through credibility was positive at low (effect = 0.4744, BootCI [0.1238, 0.8139]) and moderate psychological ownership (effect = 0.3085, BootCI [0.0334, 0.5714]), but negative at high psychological ownership (effect = −0.3397, BootCI [−0.5964, −0.1047]). The index of moderated mediation was significant (index = −0.1659, BootCI [−0.2637, −0.0706]).
For purchase intention, the indirect effect through credibility was again positive at low (effect = 0.4487, BootCI [0.1194, 0.7877]) and moderate psychological ownership (effect = 0.2918, BootCI [0.0362, 0.5560]), but negative at high psychological ownership (effect = −0.3213, BootCI [−0.5682, −0.0939]). The index of moderated mediation was significant (index = −0.1569, BootCI [−0.2495, −0.0672]).
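Because Model 7 places the interaction in the mediator equation only, the index of moderated mediation reduces to the product a3 × b, which can be bootstrapped directly. The sketch below illustrates this under the same hypothetical variable names as above, for one outcome at a time.

```python
import numpy as np
import statsmodels.formula.api as smf

def index_of_modmed_ci(data, outcome, reps=10_000, seed=1):
    """Percentile bootstrap 95% CI for the index of moderated mediation (a3 * b)."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(reps):
        s = data.sample(frac=1, replace=True, random_state=rng)
        a_fit = smf.ols("credibility ~ condition * own_c", data=s).fit()
        b_fit = smf.ols(f"{outcome} ~ condition + credibility", data=s).fit()
        draws.append(a_fit.params["condition:own_c"] * b_fit.params["credibility"])
    return np.percentile(draws, [2.5, 97.5])

# e.g., index CI for intention to use the recommendation system:
# lo, hi = index_of_modmed_ci(ai, "use_intention")
```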

4.4. Discussion

Taken together, these findings indicate that the mediating role of credibility is contingent on psychological ownership. When psychological ownership is low, AI recommendation agents exhibit a negative credibility-based pathway consistent with algorithm aversion. As psychological ownership increases, this indirect effect attenuates and ultimately reverses, such that customized AI agents no longer suffer a credibility disadvantage and instead generate stronger downstream behavioral intentions. These results provide strong support for H5.
An important nuance in the results of Study 2 is that the conditional indirect effects through credibility do not merely weaken but change direction at high levels of psychological ownership. This pattern suggests that strong ownership alters the evaluative reference point through which AI agents are judged. When psychological ownership is high, AI agents may no longer be evaluated as external or impersonal sources but as self-related entities, which attenuates credibility penalties typically associated with algorithmic sources. Under such conditions, additional customization loses its incremental credibility advantage, not because credibility declines, but because ownership-driven self-extension reduces perceived distance between the user and the AI agent. As a result, credibility evaluations converge or reverse across AI agent types, producing a change in the direction of the indirect effect.
These findings also speak to the broader debate between algorithm aversion and algorithm appreciation. While prior research often contrasts these perspectives as competing explanations, the present results suggest that they are conditional rather than mutually exclusive. In contexts characterized by low psychological ownership, evaluations of AI recommendation agents align with algorithm aversion, manifesting as lower credibility and weaker downstream responses. However, as psychological ownership increases, this aversion diminishes and can even reverse, producing patterns consistent with algorithm appreciation. This conditional shift indicates that performance improvements or technical sophistication alone are insufficient to resolve resistance to AI agents; instead, relational design features that foster ownership fundamentally alter the evaluative frame through which AI agents are judged.
Importantly, these findings also challenge a dominant assumption in algorithm aversion research, namely that resistance toward AI agents can be resolved primarily through improvements in performance, accuracy, or explainability. The present results indicate that even when such technological attributes are held constant, user evaluations remain sensitive to how the AI agent is cognitively and relationally construed. Psychological ownership alters the evaluative frame itself, allowing AI agents to be judged not merely as external decision aids but as self-related entities. This perspective helps explain why prior studies report mixed evidence regarding algorithmic resistance despite comparable system performance.
At the same time, alternative explanations warrant consideration. One possibility is that high psychological ownership induces a self-extension bias, whereby evaluations of the AI agent become more favorable because the agent is perceived as an extension of the self rather than because of changes in perceived competence. Another possibility is that strong ownership reduces the incremental benefit of customization cues by saturating evaluative judgments, thereby diminishing relative differences between AI agent types. While the current findings are consistent with a credibility-based relational mechanism, future research using longitudinal designs or behavioral usage data would be valuable for disentangling these alternative explanations.

5. General Discussion

Study 2 provides robust evidence that psychological ownership serves as a critical boundary condition shaping consumer evaluations of AI recommendation agents. Replicating Study 1, baseline AI agents were perceived as less credible than human agents, and credibility consistently predicted all three behavioral intention outcomes. However, when participants were allowed to actively customize the AI agent, perceived credibility increased substantially, in some cases exceeding evaluations of the human agent. This pattern suggests that user-initiated customization alters how AI agents are cognitively and affectively construed, shifting them from impersonal algorithmic systems toward self-related interactive partners.
More importantly, conditional process analyses demonstrated that psychological ownership systematically moderated both the effect of agent type on credibility and the indirect effect of agent type on behavioral intentions through credibility. When psychological ownership was low, AI agents exhibited a credibility disadvantage that translated into weaker behavioral intentions through reduced trust in the recommendation source. As psychological ownership increased, this indirect disadvantage attenuated and shifted in direction, indicating that high ownership reframes AI agents as aligned with users’ preferences and identity. This finding extends prior algorithm aversion research by showing that agency over the AI agent itself, rather than mere control over algorithmic outputs, can reshape credibility perceptions and downstream decision outcomes.
Taken together, the findings demonstrate that credibility is not a fixed attribute of AI recommendation agents but is dynamically shaped by user experience design features that cultivate psychological ownership. This provides a theoretically grounded explanation for why AI agents sometimes suffer from credibility deficits and identifies a concrete pathway through which such deficits can be mitigated through user-centered AI design.

5.1. Implications for Theory and Practice

From a theoretical perspective, this research contributes to emerging literature on human–algorithm interaction by demonstrating that credibility perceptions toward AI agents are not solely determined by algorithmic performance or anthropomorphic cues, but can be systematically reshaped through psychological ownership. Prior work has documented algorithm aversion and trust deficits toward AI decision systems [6,9], as well as the importance of source credibility in persuasion and recommendation contexts [44,50]. The present findings integrate these streams by showing that user-initiated customization of AI agents cultivates psychological ownership, which in turn moderates credibility formation and downstream behavioral intentions. In doing so, this research extends psychological ownership theory into AI-mediated service encounters, where ownership of an interactive agent, rather than ownership of a physical or digital object, becomes a key mechanism shaping trust and decision behavior [49,51].
From a practical perspective, the findings suggest that AI-enabled recommendation systems should move beyond static, firm-designed agents toward interfaces that allow users to actively shape their AI counterparts. Current AI platforms increasingly offer agent personalization and avatar creation features, yet such design choices are often treated as aesthetic add-ons rather than trust-building mechanisms. The present results indicate that customization features that allow users to define an agent’s identity, style, or persona can meaningfully improve credibility perceptions and increase acceptance of AI recommendations. This aligns with growing evidence that user agency and co-creation enhance engagement and trust in digital environments [52,53], and with recent calls for human-centered AI design that prioritizes user control and interpretability [54]. For firms deploying AI recommendation agents, investing in user-driven agent customization may therefore represent a viable strategy for overcoming credibility barriers and improving long-term adoption.
In product design terms, customization in AI recommendation agents can take multiple concrete forms. These include allowing users to select or define an agent’s persona, tone, and conversational style, assigning a name or visual identity, and adjusting interaction preferences that shape how recommendations are framed and delivered. Such features move beyond surface-level aesthetics by enabling users to exercise meaningful agency over the agent’s identity and behavior. Importantly, the present findings suggest that these ownership-oriented customization elements function as credibility-shaping mechanisms, rather than as purely decorative interface options, with direct implications for user trust and adoption in e-commerce platforms.

5.2. Limitations and Future Research

Several limitations should be acknowledged. First, the studies relied on scenario-based experimental designs using online samples, which may not fully capture long-term interactions with AI agents in naturalistic service environments. While the present research employed controlled experimental designs that are well suited for identifying causal mechanisms, future research should examine whether psychological ownership effects persist over repeated real-world interactions with deployed AI systems. Both studies relied on single-session exposure to recommendation agents. Yet prior research suggests that trust and reliance in technological systems may vary over time as users accumulate interaction experience rather than remaining static after initial exposure [55]. Recent longitudinal evidence further demonstrates that trust in personalized intelligent agents develops dynamically through repeated real-world interactions [56]. Accordingly, future research should examine whether the psychological ownership–credibility mechanism identified in this research persists, strengthens, or decays across extended usage periods and naturalistic deployment contexts. Such longitudinal designs would offer deeper insight into how ownership-based design features shape enduring consumer–AI relationships rather than short-term experimental responses.
Second, psychological ownership was induced through structured customization tasks within an experimental interface. Although this approach enhances internal validity, real-world AI systems may afford varying degrees of customization, autonomy, and user control, which can shape user engagement and trust formation differently [53,54]. Future studies should therefore investigate boundary conditions under which ownership-based interventions remain effective, such as differences in platform design, task domain, levels of algorithmic transparency, and individual differences in AI literacy, all of which are known to shape users’ perceptions of control, understanding of AI systems, and trust calibration toward algorithmic agents [54,57,58].
Third, the current studies focused on recommendation contexts; extending the framework to other AI-mediated service encounters such as virtual assistants, automated customer service agents, or generative content platforms would further establish the generalizability of the proposed credibility formation mechanism [59,60]. In such settings, users often engage in repeated and relational interactions with AI agents, which may strengthen or reshape psychological ownership dynamics over time. Examining longitudinal ownership development in these environments would provide deeper insight into how credibility and trust toward AI systems evolve beyond initial exposure.
Finally, because key constructs in the present research were measured using self-reported Likert-type scales within a single survey session, the potential for common method bias cannot be fully ruled out. Although the use of experimental manipulations helps mitigate this concern, future research could further address common method issues by incorporating multi-source data, behavioral usage measures, or temporally separated measurement designs. Moreover, extending this framework to field settings, such as live e-commerce platforms or A/B testing environments, would allow for examination of ownership-driven credibility dynamics under naturalistic usage conditions.

6. Conclusions

This research investigated how consumers evaluate AI recommendation agents and identified credibility as a central psychological mechanism linking agent type to behavioral intentions. Across two controlled experiments, the findings consistently showed that AI recommendation agents suffer an initial credibility disadvantage relative to human agents, which in turn reduces intention to use the system, recommendation acceptance, and purchase intention. These results establish credibility as a robust mediator in AI-enabled recommendation contexts.
Crucially, this research demonstrates that credibility perceptions of AI agents are not fixed. When users were given opportunities to personalize and customize an AI agent, psychological ownership emerged as a powerful boundary condition that reshaped source evaluations. Psychological ownership reduced the credibility disadvantage of AI agents, and under strong ownership conditions, AI agents were even perceived as more credible than human agents. Through this mechanism, AI agents transitioned from being perceived as impersonal algorithmic tools to self-related interactive partners, thereby altering downstream decision outcomes.
By integrating recommendation agent research, credibility theory, and psychological ownership theory, this work advances understanding of how human–AI interactions in electronic commerce environments can be designed to foster trust and sustained engagement. Rather than treating the credibility deficits of AI agents as inevitable, the findings show that user experience design features that cultivate ownership offer a viable pathway for overcoming algorithmic skepticism. As AI agents become increasingly embedded in digital marketplaces, aligning system design with fundamental psychological processes will be essential for realizing their full commercial and societal potential. From a broader perspective, these findings also underscore the importance of platform governance and ethical AI design approaches that emphasize user agency, transparency of role boundaries, and responsible customization practices when deploying AI recommendation agents at scale.
From an electronic commerce perspective, this research reframes the credibility of AI recommendation agents as a relational, design-contingent judgment rather than a fixed technological property. Platforms that rely solely on improving algorithmic accuracy or transparency may therefore fail to address persistent psychological resistance toward AI agents. In contrast, incorporating customization features that allow users to shape an agent’s identity, tone, or persona can foster psychological ownership, mitigating credibility deficits and strengthening downstream behavioral intentions. This offers a concrete design pathway through which e-commerce platforms can move beyond efficiency gains toward sustained user acceptance of, and engagement with, AI-driven recommendation systems.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Hankuk University of Foreign Studies (protocol code HUFS-2507-006; 2 July 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical restrictions.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jannach, D. Recommender Systems: An Introduction; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar] [CrossRef]
  2. Ricci, F.; Rokach, L.; Shapira, B. Recommender Systems: Techniques, Applications, and Challenges. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Eds.; Springer: New York, NY, USA, 2022; pp. 1–35. [Google Scholar] [CrossRef]
  3. Adomavicius, G.; Tuzhilin, A. Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Trans. Knowl. Data Eng. 2005, 17, 734–749. [Google Scholar] [CrossRef]
  4. Xiao, B.; Benbasat, I. E-Commerce Product Recommendation Agents: Use, Characteristics, and Impact. MIS Q. 2007, 31, 137–209. [Google Scholar] [CrossRef]
  5. Maedche, A.; Legner, C.; Benlian, A.; Berger, B.; Gimpel, H.; Hess, T.; Hinz, O.; Morana, S.; Söllner, M. AI-Based Digital Assistants. Bus. Inf. Syst. Eng. 2019, 61, 535–544. [Google Scholar] [CrossRef]
  6. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef]
  7. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Manag. Sci. 2018, 64, 1155–1170. [Google Scholar] [CrossRef]
  8. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  9. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-Dependent Algorithm Aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  10. Pornpitakpan, C. The Persuasiveness of Source Credibility: A Critical Review of Five Decades’ Evidence. J. Appl. Soc. Psychol. 2004, 34, 243–281. [Google Scholar] [CrossRef]
  11. Metzger, M.J.; Flanagin, A.J.; Medders, R.B. Social and Heuristic Approaches to Credibility Evaluation Online. J. Commun. 2010, 60, 413–439. [Google Scholar] [CrossRef]
  12. Wang, W.; Benbasat, I. Recommendation Agents for Electronic Commerce: Effects of Explanation Facilities on Trusting Beliefs. J. Manag. Inf. Syst. 2007, 23, 217–246. [Google Scholar] [CrossRef]
  13. Komiak, S.Y.X.; Benbasat, I. The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents. MIS Q. 2006, 30, 941–960. [Google Scholar] [CrossRef]
  14. McKnight, D.H.; Choudhury, V.; Kacmar, C. The Impact of Initial Consumer Trust on Intentions to Transact with a Web Site: A Trust Building Model. J. Strateg. Inf. Syst. 2002, 11, 297–323. [Google Scholar] [CrossRef]
  15. Dabholkar, P.A.; Sheng, X. Consumer Participation in Using Online Recommendation Agents: Effects on Satisfaction, Trust, and Purchase Intentions. Serv. Ind. J. 2012, 32, 1433–1449. [Google Scholar] [CrossRef]
  16. Langer, M.; Baum, K.; König, C.J.; Hähne, V.; Oster, D.; Speith, T. Spare Me the Details: How the Type of Information about Automated Interviews Influences Applicant Reactions. Int. J. Sel. Assess. 2021, 29, 154–169. [Google Scholar] [CrossRef]
  17. Shin, D. User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
  18. Kizilcec, R.F. How Much Information? Effects of Transparency on Trust in an Algorithmic Interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems; CHI ’16; Association for Computing Machinery: New York, NY, USA, 2016; pp. 2390–2395. [Google Scholar] [CrossRef]
  19. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; KDD ’16; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
  20. Pierce, J.L.; Kostova, T.; Dirks, K.T. Toward a Theory of Psychological Ownership in Organizations. Acad. Manage. Rev. 2001, 26, 298–310. [Google Scholar] [CrossRef]
  21. Kirk, C.P.; Swain, S.D.; Gaskin, J.E. I’m Proud of It: Consumer Technology Appropriation and Psychological Ownership. J. Mark. Theory Pract. 2015, 23, 166–184. [Google Scholar] [CrossRef]
  22. Pierce, J.L.; Jussila, I.; Cummings, A. Psychological Ownership within the Job Design Context: Revision of the Job Characteristics Model. J. Organ. Behav. 2009, 30, 477–496. [Google Scholar] [CrossRef]
  23. Kirk, C.P.; Swain, S.D. Consumer Psychological Ownership of Digital Technology. In Psychological Ownership and Consumer Behavior; Springer: Cham, Switzerland, 2018; pp. 69–90. [Google Scholar] [CrossRef]
  24. Yang, J.; Maeng, A.; Jung, S.-U. The Impact of AI-Powered Ad Customization: Exploring the Impact of Engagement, Psychological Ownership, Responsibility, and Attitude on Behavioral Intentions. J. Advert. 2025, 1–24. [Google Scholar] [CrossRef]
  25. Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to Medical Artificial Intelligence. J. Consum. Res. 2019, 46, 629–650. [Google Scholar] [CrossRef]
  26. Novozhilova, E.; Mays, K.; Paik, S.; Katz, J.E. More Capable, Less Benevolent: Trust Perceptions of AI Systems across Societal Contexts. Mach. Learn. Knowl. Extr. 2024, 6, 342–366. [Google Scholar] [CrossRef]
  27. Araujo, T. Living up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and Communicative Agency Framing on Conversational Agent and Company Perceptions. Comput. Hum. Behav. 2018, 85, 183–189. [Google Scholar] [CrossRef]
  28. Luo, X.; Tong, S.; Fang, Z.; Qu, Z. Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Mark. Sci. 2019, 38, 937–947. [Google Scholar] [CrossRef]
  29. Flanagin, A.J.; Metzger, M.J. The Role of Site Features, User Attributes, and Information Verification Behaviors on the Perceived Credibility of Web-Based Information. New Media Soc. 2007, 9, 319–342. [Google Scholar] [CrossRef]
  30. Flanagin, A.J.; Metzger, M.J. Trusting Expert- versus User-Generated Ratings Online: The Role of Information Volume, Valence, and Consumer Characteristics. Comput. Hum. Behav. 2013, 29, 1626–1634. [Google Scholar] [CrossRef]
  31. Gefen, D.; Karahanna, E.; Straub, D. Trust and TAM in Online Shopping: An Integrated Model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  32. Acharya, N.; Sassenberg, A.-M.; Soar, J. Consumers’ Behavioural Intentions to Reuse Recommender Systems: Assessing the Effects of Trust Propensity, Trusting Beliefs and Perceived Usefulness. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 55–78. [Google Scholar] [CrossRef]
  33. Shaker, A.K.; Mostafa, R.H.A.; Elseidi, R.I. Predicting Intention to Follow Online Restaurant Community Advice: A Trust-Integrated Technology Acceptance Model. Eur. J. Manag. Bus. Econ. 2021, 32, 185–202. [Google Scholar] [CrossRef]
  34. Sundar, S.S. The MAIN Model: A Heuristic Approach to Understanding Technology Effects on Credibility; MacArthur Foundation Digital Media and Learning Initiative: Cambridge, MA, USA, 2008. [Google Scholar]
  35. Park, K.; Yoon, H.Y. Beyond the Code: The Impact of AI Algorithm Transparency Signaling on User Trust and Relational Satisfaction. Public Relat. Rev. 2024, 50, 102507. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Liao, Q.V.; Bellamy, R.K. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA, 2020; pp. 295–305. [Google Scholar] [CrossRef]
  37. Luo, Y.; Zhao, Z.; Xu, X.; Zhao, Y.; Yang, F. The Influence of Recommendation Algorithms on Users’ Intention to Adopt Health Information: Does Trust Belief Play a Role? J. Am. Med. Inform. Assoc. 2025, 32, 1415–1424. [Google Scholar] [CrossRef]
  38. Belk, R.W. Possessions and the Extended Self. J. Consum. Res. 1988, 15, 139–168. [Google Scholar] [CrossRef]
  39. Aguinis, H.; Edwards, J.R.; Bradley, K.J. Improving Our Understanding of Moderation and Mediation in Strategic Management Research. Organ. Res. Methods 2017, 20, 665–685. [Google Scholar] [CrossRef]
  40. Preacher, K.J.; Rucker, D.D.; Hayes, A.F. Addressing Moderated Mediation Hypotheses: Theory, Methods, and Prescriptions. Multivar. Behav. Res. 2007, 42, 185–227. [Google Scholar] [CrossRef]
  41. Response Quality. Available online: https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/response-quality/ (accessed on 1 October 2025).
  42. Wise, S.L.; Ma, L. Setting Response Time Thresholds for a CAT Item Pool: The Normative Threshold Method. In Proceedings of the Annual Meeting of the National Council on Measurement in Education, Vancouver, BC, Canada, 14–16 April 2012. [Google Scholar]
  43. Greszki, R.; Meyer, M.; Schoen, H. Exploring the Effects of Removing “Too Fast” Responses and Respondents from Web Surveys. Public Opin. Q. 2015, 79, 471–503. [Google Scholar] [CrossRef]
  44. Ohanian, R. Construction and Validation of a Scale to Measure Celebrity Endorsers’ Perceived Expertise, Trustworthiness, and Attractiveness. J. Advert. 1990, 19, 39–52. [Google Scholar] [CrossRef]
  45. Bhattacherjee, A. Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  46. Casaló, L.V.; Flavián, C.; Guinalíu, M. Understanding the Intention to Follow the Advice Obtained in an Online Travel Community. Comput. Hum. Behav. 2011, 27, 622–633. [Google Scholar] [CrossRef]
  47. Napoli, J.; Dickinson, S.J.; Beverland, M.B.; Farrelly, F. Measuring Consumer-Based Brand Authenticity. J. Bus. Res. 2014, 67, 1090–1098. [Google Scholar] [CrossRef]
  48. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, 3rd ed.; Methodology in the Social Sciences; The Guilford Press: New York, NY, USA; London, UK, 2022. [Google Scholar]
  49. Peck, J.; Shu, S.B. The Effect of Mere Touch on Perceived Ownership. J. Consum. Res. 2009, 36, 434–447. [Google Scholar] [CrossRef]
  50. Hovland, C.I.; Weiss, W. The Influence of Source Credibility on Communication Effectiveness. Public Opin. Q. 1951, 15, 635–650. [Google Scholar] [CrossRef]
  51. Pierce, J.L.; Kostova, T.; Dirks, K.T. The State of Psychological Ownership: Integrating and Extending a Century of Research. Rev. Gen. Psychol. 2003, 7, 84–107. [Google Scholar] [CrossRef]
  52. Hollebeek, L.D.; Glynn, M.S.; Brodie, R.J. Consumer Brand Engagement in Social Media: Conceptualization, Scale Development and Validation. J. Interact. Mark. 2014, 28, 149–165. [Google Scholar] [CrossRef]
  53. Huang, M.-H.; Rust, R.T. A Strategic Framework for Artificial Intelligence in Marketing. J. Acad. Mark. Sci. 2021, 49, 30–50. [Google Scholar] [CrossRef]
  54. Shneiderman, B. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. Int. J. Hum. Comput. Interact. 2020, 36, 495–504. [Google Scholar] [CrossRef]
  55. McKnight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a Specific Technology: An Investigation of Its Components and Measures. ACM Trans. Manag. Inf. Syst. 2011, 2, 12. [Google Scholar] [CrossRef]
  56. Zafari, S.; de Pagter, J.; Papagni, G.; Rosenstein, A.; Filzmoser, M.; Koeszegi, S.T. Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System. Multimodal Technol. Interact. 2024, 8, 20. [Google Scholar] [CrossRef]
  57. Parasuraman, A. Technology Readiness Index (Tri): A Multiple-Item Scale to Measure Readiness to Embrace New Technologies. J. Serv. Res. 2000, 2, 307–320. [Google Scholar] [CrossRef]
  58. Pinski, M.; Hofmann, T.; Benlian, A. AI Literacy for the Top Management: An Upper Echelons Perspective on Corporate AI Orientation and Implementation Ability. Electron. Mark. 2024, 34, 24. [Google Scholar] [CrossRef]
  59. Glikson, E.; Woolley, A.W. Human Trust in Artificial Intelligence: Review of Empirical Research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  60. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
Figure 1. Conceptual Model.