2.1. AI-Driven Personalization in E-Commerce
AI-driven personalization in e-commerce refers to the use of advanced artificial intelligence algorithms to tailor content, product recommendations, pricing, and user interfaces to individual consumers based on their behavioral data, preferences, and interactions [2,3,10]. Fundamentally, this process employs mechanisms such as machine learning, recommender systems, and predictive analytics to examine extensive datasets, including browsing history, purchase patterns, and real-time engagement, to deliver highly relevant experiences [3,20]. Recommender systems, a fundamental component of this methodology, utilize techniques such as collaborative filtering, which aligns users with similar profiles; content-based filtering, which proposes items similar to previous preferences; and hybrid models that integrate both methods to enhance accuracy [10,21]. Machine learning facilitates continuous adaptation through techniques such as clustering, classification, and deep neural networks. Concurrently, predictive analytics is employed to forecast future behaviors, thereby enabling the preemptive customization of offerings, including dynamic pricing and personalized search results [2,4]. These technologies have transitioned from rule-based segmentation to data-driven, real-time personalization, enhancing platforms such as Amazon’s recommendation engine, which substantially increases conversion rates [3,22].
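These filtering techniques can be illustrated with a minimal sketch. The toy rating matrix, function names, and values below are hypothetical, chosen only to show the logic of user-based collaborative filtering; content-based and hybrid variants substitute or blend item-feature similarity in place of user-to-user similarity.

```python
# Illustrative sketch only: user-based collaborative filtering
# with cosine similarity on a hypothetical toy rating matrix.
from math import sqrt

# Rows = users, columns = items; 0 means "not yet rated".
RATINGS = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
]

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict(user, item):
    """Similarity-weighted mean of the ratings other users gave this item."""
    num = den = 0.0
    for other, row in enumerate(RATINGS):
        if other == user or row[item] == 0:
            continue
        w = cosine_sim(RATINGS[user], row)
        num += w * row[item]
        den += abs(w)
    return num / den if den else 0.0

# User 0's predicted rating for item 2: only user 3 has rated it,
# so the weighted mean collapses to user 3's rating of 5.
print(round(predict(0, 2), 2))
```

In production recommenders this neighborhood logic is typically replaced by matrix factorization or deep models, but the underlying idea of borrowing preferences from similar users is the same.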
The ascent of personalization within digital commerce has been rapid and global, driven by substantial expansion in e-commerce. Companies such as Amazon, Netflix, and Alibaba lead this shift, deploying AI-powered assistants to reach millions of users and sustain their engagement [4]. By 2023, AI-enhanced personalization had become widespread, with platforms employing machine learning to develop multivariant user interfaces and conduct behavioral tracking, thereby surpassing traditional product recommendation systems [3,21]. This advancement reflects a broader trend in digitalization, wherein billions of individuals shop online daily, necessitating seamless and adaptive experiences amid an abundance of choices [3].
Personalization has emerged as a central marketing strategy owing to its proven ability to reduce cognitive load, optimize relevance, and improve consumer outcomes. In an age characterized by information abundance, artificial intelligence filters extraneous data by offering relevant recommendations, thereby facilitating decision-making and reducing choice paralysis; this is exemplified by intelligent search engines that prioritize items matching user queries [20,23]. This relevance boosts engagement, satisfaction, and loyalty; customized recommendations align with core needs, increasing click intentions, repeat purchases, and lifetime value [2,21]. Firms leveraging AI report revenue gains of up to 40% and satisfaction lifts of 20%, outpacing competitors through precision targeting that fosters perceived value and emotional connections [24,25]. In contrast to mass marketing, personalization enables “thousands of people, thousands of faces” strategies, boosting conversion rates through predictive incentives [20]. From a sustainability perspective, these efficiency gains underscore the strategic importance of personalization while also raising questions about the long-term social implications of intensive data use.
Although research on artificial intelligence and personalization is extensive, it remains predominantly concentrated in developed Western markets and major Asian economies, such as China [4,21,26]. This contextual imbalance highlights the critical need to examine AI adoption and its psychological impact in emerging markets such as Turkey. The literature clearly indicates that cultural differences, economic dynamics, and technological readiness in developing countries may constrain the direct applicability of findings from the U.S. and Europe. Consequently, localized empirical studies are crucial for enhancing the external validity and contextual breadth of AI marketing knowledge [4,21].
In Turkey, the e-commerce ecosystem mirrors this global surge with rapid AI adoption. Dominated by the local giants Trendyol (Alibaba-backed, 138 million monthly visits, fashion-focused with proprietary brands) and Hepsiburada (a pioneering books-to-multicategory platform offering secure mobile shopping), these platforms drive exports and SME growth [19,27]. Trendyol and Hepsiburada have integrated AI for competitive advantages, such as personalized logistics and recommendations, capitalizing on post-pandemic digital shifts in high-value sectors [28,29]. With companies such as Amazon entering the scene, advancements in personalization through machine learning-powered interfaces and chatbots are strengthening Turkey’s oligopolistic market, aligning with the behaviors of regional consumers [19,30]. This data-intensive AI paradigm significantly shapes consumer experiences, offering engagement benefits while also presenting challenges related to privacy risks, algorithmic bias, and trust erosion [2,10]. Accordingly, the sustainability of AI-driven personalization depends not only on efficiency gains but also on the presence of ethical governance mechanisms that protect consumer well-being in AI-mediated commerce. In the Turkish context, these dynamics underscore the need for transparent and responsible AI adoption to sustain long-term digital market growth.
2.2. Consumer Psychological Well-Being in AI-Driven E-Commerce
Consumer psychological well-being in digital contexts refers to individuals’ subjective assessments of their mental and emotional states during interactions with technology-mediated environments, such as e-commerce platforms [31]. Positive outcomes in these contexts include satisfaction, flow, autonomy, and flourishing, whereas negative outcomes include anxiety, overload, and reduced self-efficacy [11,13]. Unlike general mental health, consumer psychological well-being emphasizes technology-related aspects such as perceived competence (proficiency in digital activities), warmth (emotional satisfaction through AI interactions), and the balance between hedonic pleasure (instant gratification) and eudaimonic growth (meaningful involvement) [13,32,33]. In e-commerce, this is evident in shopping experiences that either foster happiness, fluency, and loyalty or induce stress due to intrusive personalization [5,20]. Frameworks highlight self-determination through the satisfaction of basic needs, such as autonomy, competence, and relatedness, within digital practices that can lead to harm (e.g., addiction) or benefits (e.g., empowerment) [11,34]. Accordingly, consumer psychological well-being has emerged as a critical evaluative outcome in AI-mediated consumption environments.
The theoretical foundations draw from digital well-being theories, integrating hedonic and eudaimonic perspectives within socio-technical systems. Digital well-being considers subjective well-being within media-rich environments, where activities such as scrolling or engaging with AI offer both benefits, such as convenience, and drawbacks, such as surveillance [34]. Hedonic well-being, similar to subjective well-being, emphasizes pleasure, happiness, and life satisfaction and is assessed through positive emotions and the absence of stress in digital services [35,36]. Eudaimonic well-being, on the other hand, emphasizes meaning, personal growth, relationships, and achievement, aligning with the PERMA model (positive emotion, engagement, relationships, meaning, and accomplishment) [13,32,36]. Socio-Technical Systems theory conceptualizes technology as being strongly linked to human components. AI platforms contribute to human flourishing through adaptive interfaces; however, they also present risks such as “parametric reductionism,” which involves the oversimplification of choices, and “agency transference,” which entails the relinquishment of control [11,12]. The Technology and Consumer Well-Being Paradox Model illustrates this contradiction: while technology enhances eudaimonic depth through personalized discovery, it simultaneously diminishes hedonic pleasure due to overload [13]. According to Self-Determination Theory, the interaction between humans and AI is influenced by perceived interactivity, which enhances value and well-being, with privacy acting as a moderating factor [31].
Digital environments can profoundly support well-being by reducing cognitive load through intuitive designs and personalized flows, thereby enhancing engagement and satisfaction [32,37]. The quality of AI services, characterized by technical reliability, personalization, and hyperconnectivity, enhances the perceived competence and warmth of the AI. This, in turn, improves well-being in the retail sector by providing seamless recommendations that affirm user agency [33,38]. In e-commerce, digital interfaces facilitate the PERMA model by fostering positive emotions through gamified shopping experiences, enhancing engagement via immersive extended reality (XR)-like previews, and creating meaning through personalized value co-creation [36,39]. Mental models indicate that consumers perceive AI-driven personalization as enhancing both individual well-being, such as through efficient decision-making, and societal benefits, including inclusive access. This perception contributes to the development of trust-satisfaction-loyalty chains [5,17].
In contrast, digital environments can negatively impact well-being due to the tensions between autonomy and technology. Specifically, AI-generated recommendations may provoke reactance, thereby reducing both performance expectancy and satisfaction [38]. Algorithmic opacity and microtargeting induce surveillance fears, algorithmic bias, and over-reliance, constraining self-identity and relational dynamics [11,12]. Excessive use leads to overconsumption or multitasking, which diminishes both hedonic pleasure and eudaimonic purposes. Personalization paradoxes further exacerbate fatigue when they do not align with individual preferences [13,40]. Privacy concerns mitigate the advantages of interactivity, thereby undermining psychological well-being in data-intensive e-commerce environments [31,41]. This context reveals dualities: while services such as AI assistants enhance fluency and happiness, they also pose the risks of addiction and reduced autonomy [20,35].
This duality is critically relevant to consumer behavior and human-AI interaction in e-commerce. Well-being drives loyalty and engagement; positive psychological loyalty correlates with repeat purchases and revenue, whereas negative loyalty spurs churn [13,21]. AI personalization affects decision-making through perceived utility and trust, which mediate engagement; however, it may falter if autonomy is compromised [37,42]. In human-AI interactions, maintaining a balanced level of agency is crucial for sustaining motivation: a moderate degree of algorithmic autonomy optimizes purchasing behavior, following an inverted U-shaped pattern, whereas extreme levels of personalization, whether excessive or insufficient, tend to increase user reactance [38,42]. In the context of Turkish platforms such as Trendyol, where rapid AI adoption mirrors global trends, promoting digital well-being necessitates ethical designs that emphasize transparency, self-efficacy, and adherence to cultural privacy norms. Such designs are essential for improving behavioral outcomes, including sustained customer loyalty, particularly in the face of tensions between personalization and privacy [10,17]. Ensuring alignment between technological capability and consumer psychology is thus central to evaluating the long-term viability of AI-driven personalization in e-commerce markets.
2.3. Consumer Privacy Concerns in Digital Environments
Consumer privacy concerns reflect individuals’ worries about the collection, storage, use, and potential misuse of their personal information during digital interactions, with particular emphasis on the lack of control over data flows [8,9]. These concerns, defined as subjective perceptions of fairness in informational privacy, emerge when technologies facilitate surveillance, unauthorized secondary use, errors, or improper access to data, resulting in feelings of vulnerability [8,43]. Privacy concerns are distinct from related concepts: while privacy risks involve objective threats such as data breaches or identity theft, concerns are attitudinal evaluations of these risks [8,26]. Privacy calculus is the mental process in which people weigh the advantages of sharing information, such as customization, against the drawbacks, such as losing control. This often helps resolve the “privacy paradox,” which is the discrepancy between expressed concerns and actual data-sharing behavior [8]. Perceived surveillance, a subset, refers to the subjective feeling of being constantly watched, which heightens discomfort in data-rich environments without specifying particular outcomes [44].
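The privacy calculus described above is sometimes summarized as a stylized cost-benefit comparison; the notation below is an illustrative formalization, not one drawn from the cited works:

```latex
\text{NetValue} \;=\; \sum_{k} \text{Benefit}_k \;-\; \sum_{j} \text{Concern}_j,
\qquad \text{disclose} \iff \text{NetValue} > 0,
```

where benefits include personalization utility and convenience, and concerns include perceived surveillance and loss of control. The privacy paradox then corresponds to cases in which stated concerns are high, yet observed disclosure behavior implies a positive net value.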
Communication Privacy Management Theory, developed by Petronio, conceptualizes privacy as a dialectical process wherein individuals co-own the boundaries surrounding shared information and negotiate disclosure rules within relational contexts, such as online platforms [8]. These theoretical frameworks have evolved alongside the growth of digital technology. A key issue is informational self-determination, that is, control over how one’s data are collected and used; this is especially salient in e-commerce, where personal information varies widely in sensitivity, from browsing records to health data [8,45].
In AI settings, privacy concerns intensify because of algorithmic opacity: “black box” models conceal how decisions are made, can infer private attributes from seemingly innocuous data, and can assemble highly detailed consumer profiles from these inferences [2,26]. AI recommender systems in e-commerce aggregate behavioral traces (e.g., clicks, dwell time) to predict preferences, but opaque machine learning models foster distrust, as users cannot verify fairness or audit inferences such as inferred demographics [10,26]. Microtargeting intensifies this issue by facilitating manipulative nudges that heighten surveillance concerns and bias risks, thereby perpetuating disparities [2,45]. Beyond general anxieties, context-specific concerns arise: the scale of AI enables unauthorized, low-cost data exploitation, fundamentally altering the nature of privacy [46,47]. In Turkey’s booming e-commerce, platforms such as Trendyol amplify these concerns through AI personalization, mirroring global tensions but heightened by regulatory gaps and cultural data-sharing norms [48].
These developments give rise to the Personalization–Privacy Paradox. Consumers desire personalized experiences that enhance relevance, convenience, and satisfaction, yet simultaneously resist the intrusive data collection required to deliver such benefits [8,49]. Empirical evidence indicates that, despite high levels of expressed concern (e.g., widespread discomfort with AI-based targeting), users continue to disclose personal data in exchange for perceived value, consistent with privacy calculus mechanisms [50,51]. However, when algorithmic opacity and perceived surveillance dominate, this calculus becomes unstable, leading to trust erosion and heightened psychological discomfort. In AI-driven e-commerce, increased interactivity often coincides with diminished user control, exacerbated by concerns related to biased algorithms and insufficient ethical safeguards [2,52]. These dynamics position privacy concerns as a central psychological cost of AI-driven personalization. In the Turkish context, where data protection regulations broadly resemble the GDPR and AI adoption is accelerating, achieving this balance is critical for sustaining trust, legitimacy, and long-term platform viability.
2.4. Brand Trust in AI-Enabled E-Commerce
Brand trust encompasses broader confidence in a company’s reliability, intentions, and ability to fulfill promises across interactions, extending beyond technology to include service quality and ethical practices [5,14]. Key distinctions are evident in scope and focus: trust in artificial intelligence emphasizes algorithmic opacity and explainability; trust in automation prioritizes functional reliability; and brand trust incorporates relational elements such as consistency and empathy, often alleviating technology-specific concerns [53].
Trust functions as a psychological mechanism that mitigates uncertainty in data-driven environments. It acts as a heuristic shortcut, reducing cognitive load in situations characterized by information overload and perceived risks associated with surveillance or bias [15,54]. In e-commerce, where vast behavioral data fuel personalization, trust fosters psychological safety, enabling disclosure despite privacy fears and encouraging engagement over reactance [37,55]. According to Social Exchange Theory, consumers expect reciprocity when AI systems or brands provide value, which reduces uncertainty in opaque systems [5,56].
In AI-driven personalization contexts, brand trust attenuates the psychological consequences of privacy concerns through several interrelated processes. Trust reduces anticipatory anxiety by fostering expectations that the firm will act competently, benevolently, and non-opportunistically in handling personal data. At the same time, trust enhances perceptions of procedural fairness and legitimacy, leading consumers to interpret data collection and personalization practices as acceptable rather than threatening. Trust also provides relational assurance that increases consumers’ tolerance for uncertainty, thereby weakening the extent to which privacy concerns translate into psychological distress or diminished well-being. Trust does not eliminate privacy concerns but conditions their emotional and cognitive interpretation in AI-mediated interactions.
Trust is crucial in AI-mediated personalization settings because it fosters satisfaction, loyalty, and adoption. Empirical evidence shows that trust positively influences satisfaction and loyalty, with personalization moderating this relationship [5]. Transparent AI fosters trust by offering explainability, which addresses fears that undermine confidence [10,57]. In particular, trust acts as a moderator or boundary condition in privacy-related judgments, buffering concerns about personalization paradoxes. High trust mitigates privacy calculus trade-offs, where consumers disclose information despite risks, as reliability perceptions outweigh surveillance fears [37,57]. Privacy worries amplify when AI infers sensitive data, but brand assurances restore trust, especially among privacy-sensitive users [14]. However, this buffering role should not be interpreted as implying that trust fully resolves privacy-related concerns. Rather, trust may increase consumers’ tolerance for privacy intrusions without necessarily addressing the underlying ethical or governance challenges associated with intensive data use. Therefore, brand trust functions as a conditional psychological buffer that shapes how privacy concerns translate into psychological well-being and relational stability without eliminating underlying privacy risks or resolving broader governance challenges associated with intensive data use [37,58].
2.5. Conceptual Framework and Hypotheses Development
Building on prior research that conceptualizes AI-driven personalization as a core capability of sustainable digital commerce, perceived AI-driven personalization (consumers’ subjective evaluation of customized recommendations, interfaces, and content generated through machine learning and predictive analytics) contributes to psychological well-being by reducing cognitive overload and increasing relevance in e-commerce [5,20]. This aligns with Self-Determination Theory, which holds that customization satisfies autonomy, competence, and relatedness needs, promoting hedonic pleasure (e.g., enjoyment, flow) and eudaimonic flourishing (e.g., purpose, engagement) [11,17,42].
Although prior research recognizes that AI-driven personalization encompasses multiple attributes, these elements are often examined independently or aggregated, with limited theoretical attention to whether they operate through distinct psychological mechanisms. In particular, their differentiated implications for consumer psychological well-being remain unexplored. Addressing this gap, the present study conceptualizes perceived AI-driven personalization as comprising two analytically separable dimensions—perceived relevance and perceived specificity—that exert qualitatively different effects on consumer psychological well-being.
Perceived relevance reflects the extent to which AI-generated recommendations align with consumers’ goals, preferences, and needs. Recent research indicates that relevance-oriented personalization enhances perceived value and decision efficiency by reducing information overload and simplifying choice processes in digital environments [59,60]. In AI-driven e-commerce, such efficiency gains support effective decision-making, increase perceived competence, and foster positive affective responses, all of which are central components of consumer psychological well-being [61]. Moreover, empirical studies show that personalization perceived as relevant improves satisfaction and engagement without necessarily eliciting strong concerns about data intrusiveness, provided that the recommendation logic remains congruent with user expectations [62].
In contrast, perceived specificity captures the degree to which personalization signals deep, granular, and potentially intrusive knowledge about the individual consumer. Highly specific AI-driven personalization increases consumers’ awareness of extensive data collection and inferential profiling, thereby heightening their perceptions of surveillance and violations of personal boundaries [63]. Recent studies have further demonstrated that when personalization crosses perceived personal boundaries, consumers experience heightened discomfort, resistance, and autonomy threats, even when recommendations remain functionally accurate [64,65]. These responses undermine core psychological needs related to autonomy and control, which are essential for sustaining psychological well-being in AI-mediated consumption environments [31,66].
Although perceived specificity and consumer privacy concerns are closely related, they remain conceptually distinct. Perceived specificity reflects a perceptual cue indicating the depth and granularity of personalization, whereas consumer privacy concerns represent an evaluative psychological response characterized by anxiety, perceived loss of control, and discomfort regarding the use of personal data. Thus, specificity functions as an antecedent signal, whereas privacy concerns capture the psychological cost that may be activated in response to such signals.
Accordingly, although perceived relevance and perceived specificity both originate from AI-driven personalization, they operate through two distinct psychological pathways: a cognitive efficiency pathway, whereby relevance enhances psychological well-being by reducing effort and increasing perceived value, and a boundary regulation pathway, whereby specificity activates privacy-related autonomy threats that undermine well-being. This distinction helps explain why increasingly sophisticated AI personalization does not uniformly improve consumer well-being.
In summary, AI-driven personalization is more likely to enhance consumer psychological well-being when it is perceived as relevant rather than excessively specific. Empirical research has demonstrated that personalization enhances satisfaction and loyalty, increases engagement by 20%, and augments perceived control [5,21,67]. In smart systems, the perceived benefits of high personalization outweigh perceived intrusiveness, directly enhancing the well-being of personalization-prone users [17]. Consequently, the perception of AI-driven personalization positively influences consumer psychological well-being. This leads to the following hypothesis:
H1. Perceived AI-driven personalization positively influences consumer psychological well-being.
The perception of AI-driven personalization is positively correlated with consumer privacy concerns, as tailored recommendations and interfaces suggest extensive data collection, which evokes fears of surveillance and loss of control in e-commerce [8]. This concurrent increase in utility (personalization) and the necessity for data disclosure (risk) precisely characterizes the Personalization–Privacy Paradox (PPP) [9,64,68]. Personalization relies on AI algorithms that can be opaque, potentially leading consumers to feel that their data are being collected stealthily or ubiquitously, a phenomenon known as perceived surveillance [46]. The more accurate and tailored the personalization, the higher the possibility that consumers perceive privacy violations by AI algorithms, making them feel watched [8,69].
Empirical evidence confirms that higher levels of personalization increase privacy concerns. As data sensitivity increases, consumers begin to perceive cues as intrusive, leading to reduced acceptance [26,70]. Research indicates that recommender systems intensify these concerns through implicit profiling, which fosters reactance and reluctance to self-disclose [9,71]. In AI contexts, opaque algorithms intensify this link, as platforms like Amazon infer preferences from big data, tempering loyalty gains with equity doubts [10]. Thus:
H2. Perceived AI-driven personalization is positively associated with consumer privacy concerns (the Personalization–Privacy Paradox).
The implementation of AI-driven personalization, while offering clear utilitarian benefits, concurrently introduces a set of psychological risks stemming from the technology’s extensive reliance on consumer data [8]. Consumer privacy concerns specifically refer to the anxiety and perceived loss of control experienced by individuals regarding the collection, analysis, and potential misuse of their personal information by e-commerce platforms [46,72].
This issue presents a direct conflict with the fundamental elements of Psychological Well-Being (PWB). PWB is contingent on the satisfaction of psychological needs, such as autonomy, competence, and a sense of control over one’s environment [66,73]. When AI systems evoke perceptions of surveillance, identity theft, or unauthorized secondary use [46], they undermine consumers’ perceived security and mastery, thereby compromising essential psychological needs that are critical for PWB [74]. The ensuing anxiety and psychological discomfort resulting from engaging in the privacy calculus, which involves balancing risks against benefits, diminishes psychological fulfillment [8,9]. Consistent with empirical evidence, heightened privacy concerns function as a significant negative boundary condition, reducing the transformation of perceived value (derived from personalization) into positive psychological outcomes [31]. Therefore:
H3. Consumer privacy concerns negatively influence consumer psychological well-being.
AI-driven personalization significantly enhances customer utility; however, its effectiveness depends on the ongoing and intensive collection of consumer data [22]. The Personalization–Privacy Paradox posits that the pursuit of relevance necessitates data disclosure, thereby establishing a positive correlation between heightened perceived personalization and increased consumer privacy concerns [75]. This relationship is driven by consumers’ perceptions of surveillance or unauthorized secondary use, which are recognized manifestations of privacy risks in AI-based recommendation systems [8,46].
As the degree of personalization intensifies, it amplifies perceived privacy concerns, which, in turn, function as a psychological cost within the framework of Privacy Calculus Theory [76]. These privacy concerns serve as negative attitudinal constraints, posing a threat to consumers’ perceived control over their digital environment. Research indicates that privacy concerns adversely influence the relationship between perceived value derived from interaction and psychological well-being [31], thereby diminishing an individual’s sense of autonomy and fulfillment of well-being [66,73]. Consequently, the requirement for data in personalization indirectly reduces psychological well-being through privacy anxiety, suggesting a mediation effect.
H4. Consumer privacy concerns mediate the relationship between perceived AI-driven personalization and consumer psychological well-being.
Privacy Calculus Theory posits a trade-off mechanism whereby consumers evaluate the costs, specifically privacy concerns, against the benefits, such as personalization utility, prior to engaging in data disclosure [8]. This evaluative process is notably influenced by trust, which serves as a crucial psychological buffer that mitigates perceived risk and uncertainty within digital environments [77,78].
Trust is the willingness to be vulnerable to a partner’s actions despite potential risks [77]. In the context of AI-driven commerce, elevated trust in the platform or system functions as a “psychological lubricant,” enabling consumers to reduce the cognitive monitoring costs associated with privacy infringement [22,79]. When brand trust is high, consumers are more likely to perceive the exchange as fair and legitimate, even when faced with high levels of personalization that require data sharing [26].
In this evaluative process, brand trust shapes how privacy-related risks are psychologically appraised rather than merely accepted. Given that consumer privacy concerns undermine psychological well-being (PWB) by diminishing perceived autonomy and control [31,66], high levels of brand trust weaken the extent to which such concerns translate into anxiety, perceived vulnerability, and psychological discomfort. When trust in the brand is established, data collection and personalization practices are more likely to be interpreted as legitimate and aligned with consumers’ interests, thereby attenuating the negative psychological consequences of privacy concerns. Importantly, this buffering role does not imply that trust eliminates privacy risks or resolves underlying ethical tensions; rather, it conditions the emotional and cognitive impact of privacy concerns on psychological well-being.
Therefore:
H5. Brand trust moderates the relationship between consumer privacy concerns and psychological well-being, such that the negative association is weaker at higher levels of trust.
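Taken together, H1–H5 can be summarized in a stylized moderated-mediation form. The linear specification below is an illustrative assumption, not one prescribed by the cited works; PERS denotes perceived AI-driven personalization, PC consumer privacy concerns, BT brand trust, and PWB consumer psychological well-being:

```latex
\begin{aligned}
\text{PC}  &= a_0 + a_1\,\text{PERS} + \varepsilon_1,\\
\text{PWB} &= b_0 + c'\,\text{PERS} + b_1\,\text{PC} + b_2\,\text{BT}
             + b_3\,(\text{PC} \times \text{BT}) + \varepsilon_2.
\end{aligned}
```

In this notation, H1 corresponds to $c' > 0$, H2 to $a_1 > 0$, H3 to $b_1 < 0$, H4 to a negative indirect effect $a_1 b_1$, and H5 to $b_3 > 0$, so that higher brand trust weakens the negative PC–PWB link.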
Considering the relationships among the variables discussed in the literature, the research hypotheses are presented in Figure 1.