1. Introduction
Between 2020 and 2025, the accelerated expansion of artificial intelligence (AI) profoundly reshaped the ways people work, learn, communicate, and manage their emotional well-being. Advances in machine learning, natural language processing, and generative systems have made AI an integral part of everyday life [
1,
2]. This integration has redefined human interaction patterns and the very notion of psychological balance within a society increasingly mediated by algorithms [
3,
4]. However, alongside promises of efficiency, personalization, and emotional assistance, new ethical and psychosocial questions have emerged regarding AI’s effects on identity, autonomy, and the quality of affective bonds [
5,
6,
7]. Although the temporal frame “2020–2025” might appear arbitrary at first sight, it corresponds to a period marked by (a) the global digital acceleration triggered by COVID-19, (b) the mass adoption of conversational agents and generative AI, and (c) the consolidation of empirical studies that allow for a coherent synthesis of psychosocial trends. This interval therefore provides a conceptually meaningful window to examine how AI-mediated emotional experiences evolved during a phase of unprecedented technological uptake.
From a psychosocial perspective, the advent of AI has altered the dynamics of human relationships, shaping both perceptions of social support and experiences of loneliness. So-called “emotional hyperconnectivity” has partly replaced face-to-face encounters with digital interactions, in which emotions are interpreted, classified, or even generated by automated systems [
8,
9]. While this transformation has expanded access to support resources and reduced geographical and economic barriers [
10,
11,
12], it has also raised concerns about depersonalization and the erosion of emotional authenticity [
5,
13,
14].
The emergence of generative AI tools has intensified the tension between autonomy and technological dependence. Interactive AI systems and self-care applications offer immediacy, privacy, and assistance—qualities particularly appealing to those seeking emotional support [
1,
3]. Yet these same features can foster excessive reliance on technology for emotional decision-making, encouraging dependency on devices and gradually replacing human contact with digital bonds [
6,
15]. This phenomenon illustrates what has been termed the “digital affective paradox”: as technological connectivity increases, people do not necessarily feel more connected. In fact, the more they are accompanied by digital systems, the harder it becomes to sustain genuine relationships and a stable sense of belonging [
7,
9].
Although recent literature on AI and well-being has grown considerably, most studies focus on clinical or functional outcomes—such as stress reduction, learning enhancement, or work efficiency [
16,
17,
18]. Few, however, examine the relational, identity-related, and ethical implications of coexisting with intelligent systems [
1,
2]. This reveals a more precise knowledge gap: current research lacks integrative frameworks capable of explaining the constitutive tensions that arise when humans and AI systems co-shape affective experiences. Existing studies describe benefits and risks, but they rarely articulate how technological infrastructures, psychosocial processes, and ethical dilemmas interact dynamically. In particular, socio-technical and socio-material perspectives—central for understanding human–technology co-constitution—remain underutilized in the emotional well-being literature.
Moreover, existing studies tend to approach the relationship between AI and well-being from isolated disciplinary domains—clinical, educational, or organizational—without exploring their intersections [
3,
4]. While mental health research highlights AI’s therapeutic potential and its ability to broaden access to care [
5,
12], organizational studies warn of digital stress and loss of meaning at work [
19,
20]. This lack of integration constrains a comprehensive understanding of AI’s impact on human well-being and hinders the development of ethical policies that balance innovation with humanization [
2,
21]. Thus, the gap is not merely empirical but conceptual: we lack a model that situates emotional well-being within a multilayered system where AI operates simultaneously as a technological infrastructure, a relational mediator, and an ethical actor.
It is therefore essential to promote a critical reflection that transcends technological enthusiasm and focuses on the relational nature of emotional well-being [
4,
22]. Understanding AI’s impact from this lens entails recognizing that well-being depends not merely on access to digital tools but on the quality of human experiences those tools facilitate or transform [
9]. In this sense, AI should be understood not as a functional mechanism but as a relational actor actively shaping emotions, values, and contemporary social interactions [
5,
6].
The pandemic and post-pandemic contexts further accelerated this emotional-technological integration [
16]. During the COVID-19 pandemic, the massive use of digital platforms and virtual psychological support assistants demonstrated both the therapeutic potential of technology and its ethical limitations [
12,
23]. The experiences accumulated between 2020 and 2025 reveal a significant evolution in how individuals experience intimacy, vulnerability, and emotional companionship in AI-mediated environments [
7,
9]. The literature produced during this period therefore constitutes an analytically coherent corpus from which to examine emerging psychosocial patterns, rather than a merely chronological delimitation. Yet, an integrative understanding capable of articulating these insights from an interdisciplinary and ethical standpoint remains scarce [
2,
21].
Against this backdrop, the present article offers a critical reflection on the psychosocial impacts of artificial intelligence on emotional well-being, drawing upon studies published between 2020 and 2025 in journals ranked in Q1–Q3 of Journal Citation Reports (JCR) and Scimago Journal Rank (SJR). Its aim is to analyze the tensions, paradoxes, and ethical dilemmas arising from the intensive use of intelligent technologies, acknowledging both their benefits and the risks they pose to collective psychological health [
5,
22].
Furthermore, this work seeks to bridge the gap between clinical and technological approaches through a relational and contextual perspective that situates emotional well-being at the core of the debate on AI [
2,
4]. It assumes that well-being is a hybrid construct shaped by the continuous interaction between human and artificial intelligence, in which technology can either mitigate or intensify distress [
9,
21]. Finally, the study invites an interdisciplinary dialogue aimed at fostering a more conscious, ethical, and emotionally sustainable relationship with AI, recognizing that human well-being largely depends on the quality of both human and digital relationships shaping contemporary emotional life [
1,
6].
Methodology
This study adopts a critical reflection approach, grounded in an analytical review of scientific literature published between 2020 and 2025 on the psychosocial impact of artificial intelligence (AI) on emotional well-being. This approach is particularly suitable when the purpose is not to measure or empirically verify a phenomenon but to interpret, contextualize, and reconstruct it theoretically. As Booth et al. [
24] and Torraco [
25] argue, critical reviews aim to integrate findings from diverse disciplines and perspectives to achieve a deeper understanding of complex and emerging phenomena.
The choice of this design responds to the dynamic nature of AI, whose social and psychological effects evolve faster than empirical validation can capture. Consequently, this approach moves beyond the mere description of results to focus on meaning-making processes and conceptual tensions arising from the relationship between technology and emotional well-being. According to Snyder [
26], critical reviews contribute to identifying knowledge gaps, examining implicit assumptions, and proposing new interpretive frameworks for future research.
Unlike systematic review models or protocols such as PRISMA—which seek comprehensiveness and replicability—the critical reflection approach prioritizes conceptual depth, argumentative coherence, and the capacity for synthesis as indicators of academic rigor. Rather than generating closed conclusions, this type of review opens questions and expands interpretive frameworks from an interdisciplinary standpoint.
The corpus analyzed includes 40 scientific publications selected for their theoretical relevance, quality, and currency. Sources were extracted from Scopus, Web of Science, and PsycINFO, restricting the search to journals indexed in Q1 to Q3 of the SJR or JCR systems. The inclusion criteria were: (a) thematic relevance—i.e., the relationship between AI and emotional well-being or mental health; (b) temporal relevance (2020–2025); and (c) scientific solidity—ensured through peer review and publication in recognized academic outlets.
To ensure methodological transparency, the search strategy incorporated explicit Boolean strings. The core search formula was: (“artificial intelligence” OR “AI” OR “machine learning” OR “chatbot” OR “conversational agent” OR “generative AI”) AND (“emotional well-being” OR “emotional wellbeing” OR “mental health” OR “emotional support” OR “affective”). Filters included: publication years 2020–2025; language = English; document type = articles or reviews; inclusion only of journals indexed in JCR or SJR (Q1–Q3). The searches were conducted in Scopus, Web of Science, and PsycINFO between February and March 2025. These parameters guaranteed conceptual coherence and methodological robustness.
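As an illustration only, the following sketch (in Python, with hypothetical helper names such as quoted_or_block and build_query) shows how the Boolean formula and filters described above could be assembled into a reusable query string; it does not reproduce the actual search tooling or any database-specific syntax.

```python
# Illustrative sketch: assembling the core Boolean search formula and the
# filters described in the text. Helper names and structures are hypothetical.

AI_TERMS = ["artificial intelligence", "AI", "machine learning",
            "chatbot", "conversational agent", "generative AI"]
WELLBEING_TERMS = ["emotional well-being", "emotional wellbeing", "mental health",
                   "emotional support", "affective"]

def quoted_or_block(terms):
    """Join a list of terms into a parenthesized OR block of quoted phrases."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query():
    """Combine the two concept blocks with AND, as in the core search formula."""
    return f"{quoted_or_block(AI_TERMS)} AND {quoted_or_block(WELLBEING_TERMS)}"

# Filters applied in each database interface (Scopus, Web of Science, PsycINFO).
FILTERS = {
    "publication_years": (2020, 2025),
    "language": "English",
    "document_types": ["article", "review"],
    "journal_quartiles": ["Q1", "Q2", "Q3"],  # JCR or SJR indexing
}

if __name__ == "__main__":
    print(build_query())
    print(FILTERS)
```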
The selection process followed a structured sequence summarized in
Figure 1 (Flow Diagram). After removing duplicates, titles and abstracts were screened for topical alignment; full texts were then evaluated against the inclusion and exclusion criteria. Studies were excluded if they (a) lacked direct relevance to emotional well-being; (b) examined AI only from technical or engineering perspectives; (c) fell outside the 2020–2025 time frame; or (d) did not meet peer-review and indexation criteria. The final sample comprised 40 articles that met all eligibility criteria.
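The exclusion criteria (a)–(d) can likewise be expressed as a simple eligibility check. The sketch below is a hypothetical illustration; the Record fields and the is_eligible helper are invented for readability and do not correspond to any instrument used in the review.

```python
# Illustrative screening sketch: applying exclusion criteria (a)-(d) to a
# candidate record. Field names are hypothetical, chosen only for readability.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    relevant_to_emotional_wellbeing: bool   # criterion (a)
    technical_or_engineering_only: bool     # criterion (b)
    peer_reviewed_and_indexed_q1_q3: bool   # criterion (d)

def is_eligible(r: Record) -> bool:
    """Return True only if the record passes all four exclusion checks."""
    if not r.relevant_to_emotional_wellbeing:   # (a) lacks direct relevance
        return False
    if r.technical_or_engineering_only:         # (b) purely technical focus
        return False
    if not (2020 <= r.year <= 2025):            # (c) outside the time frame
        return False
    if not r.peer_reviewed_and_indexed_q1_q3:   # (d) indexation criteria
        return False
    return True

# Example: a 2019 study is excluded on criterion (c) alone.
example = Record("Chatbot support pilot", 2019, True, False, True)
print(is_eligible(example))  # False
```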
The review process unfolded in three phases: an exploratory reading to identify preliminary categories, a thematic clustering of findings around positive impacts, ethical dilemmas, and emerging challenges, and an interpretive synthesis that connected empirical evidence with conceptual frameworks.
During the exploratory phase, preliminary categories were defined inductively. Two independent readings of approximately 15% of the corpus were used to generate initial codes such as “AI-mediated emotional support,” “erosion of authenticity,” “algorithmic dependence,” and “relational displacement.” These preliminary categories provided the foundation for the subsequent thematic clustering. The clustering phase grouped studies based on converging patterns at cognitive, affective, relational, and ethical levels. This process made it possible to identify latent tensions—particularly the contradiction between expanded technological accessibility and diminished emotional authenticity—consistently highlighted in the recent literature.
Finally, the interpretive synthesis involved triangulating findings with psychosocial, ethical, and sociotechnical perspectives. This stage sought not only to summarize evidence but to articulate the constitutive tensions that emerge when humans and AI systems co-shape emotional experiences. Interpretive synthesis was supported by constant comparison techniques, allowing the contrasting of conceptual approaches and the identification of emerging theoretical gaps.
Although it does not conform to formal protocols like PRISMA, the study maintained standards of rigor, transparency, and internal coherence to ensure scientific validity. Consistency between objectives, theoretical framework, and conclusions was guaranteed through conceptual triangulation, that is, the systematic comparison of arguments and evidence from different authors to identify convergences, contradictions, and theoretical gaps [
24].
A reflexive and self-critical stance was also adopted regarding the researcher’s role as a mediator of knowledge. Recognizing the limits of interpretation—particularly in a rapidly evolving field—is essential to avoid confirmation bias or technocentric perspectives. Consequently, this critical review does not aim to replace systematic evidence but to complement it, offering a more integrative and human perspective on the psychological and social changes linked to technological expansion.
In summary, the methodology ensures a balance between academic rigor and interpretive flexibility, situating the debate on AI and emotional well-being within an interdisciplinary framework of reflection. Its value lies in generating renewed readings of the phenomenon, identifying conceptual gaps, and contributing to a more ethical, critical, and emotionally sustainable understanding of the relationship between artificial intelligence and human experience.
Figure 1 illustrates the full selection process in six sequential stages, enhancing transparency and facilitating replicability.
2. Psychosocial Impact of Artificial Intelligence on Emotional Well-Being: Review and Critical Analysis
2.1. State of the Art and Theoretical Gap
Over the past five years, research on artificial intelligence (AI) and emotional well-being has expanded across clinical psychology, organizational behavior, human–computer interaction, and educational technology. Existing reviews have examined AI-supported mental-health interventions, the effectiveness of conversational agents, human–AI trust, ethical tensions, and the psychosocial consequences of AI integration in work and learning environments. These studies collectively show that AI increasingly mediates emotional expression, interpersonal connection, and everyday self-regulation, while simultaneously raising concerns about dependence, depersonalization, and loss of authenticity. Despite this growth, current reviews tend to remain domain-specific—focusing on clinical outcomes, organizational productivity, or learning processes—without offering an integrated psychosocial and ethical perspective. Moreover, theoretical grounding is often implicit: key frameworks such as human-centered AI, mediated social interaction, theories of dehumanization, and models of trust in automation are referenced unevenly and rarely synthesized into a coherent analytic structure.
This article addresses this gap by offering a cross-contextual, integrative framework that explains how AI systems shape emotional well-being across technological, relational, and existential dimensions. By consolidating insights from psychology, ethics, organizational studies, and education, the review advances a unified conceptual model that captures the paradoxical nature of AI-mediated emotional support: its capacity to expand access to care and connection while simultaneously introducing risks of emotional over-reliance, reduced self-regulation, and symbolic forms of dehumanization. In doing so, the article clarifies the mechanisms through which AI co-constructs emotional experiences and provides a theoretically grounded foundation for understanding its impact on contemporary psychosocial life.
2.2. Transformations in Emotional Experience and Human Relationships
Artificial intelligence (AI) has become one of the key mediators of contemporary emotional life, subtly intervening in how individuals experience and make sense of their inner states. Its presence in workplaces, educational settings, and domestic environments has not merely added a technological layer to daily routines; it has reshaped the architecture of emotional expression, influencing how people interpret, externalize, and negotiate feelings with others. More than a technical instrument, AI increasingly operates as a relational environment that conditions the rhythms of empathy, communication, and mutual recognition [
1]. This shift reflects a broader sociotechnical reconfiguration in which emotional life becomes entangled with algorithmic systems that interpret, classify, and sometimes simulate affective cues.
One of the most notable transformations is the growing delegation of emotional labor to automated systems. Applications offering psychological companionship, virtual assistants that simulate attentive listening, and algorithms capable of recognizing moods through voice or facial expressions are redefining what users expect from emotional support [
11]. These technologies promise constant availability, discretion, and immediacy—features that may be appealing in contexts of loneliness or stress. At the same time, their simulated empathy introduces an ambiguous form of relationality: interactions feel personal yet lack intentionality, reciprocity, and moral depth [
8]. This paradox illustrates a broader tension between what AI appears to provide (understanding) and what it can actually offer (pattern-based approximation).
The rise in emotional hyperconnectivity intensifies this dynamic. Increasingly, individuals rely on digital platforms to express emotions or seek comfort, often replacing—or deprioritizing—face-to-face interactions [
7]. While this shift may create a sense of immediate community, it also contributes to a fragmentation of emotional presence, where speed and responsiveness overshadow depth and reflection. As Alhuwaydi [
13] and Gual-Montolio et al. [
14] emphasize, when connection becomes synonymous with constant availability, traditional markers of emotional support—silence, physical proximity, shared temporality—lose symbolic value. This erosion suggests that AI-mediated communication does not merely alter emotional expression but transforms collective norms about what counts as genuine interpersonal engagement.
From a psychosocial perspective, AI alters not only the channels of affective communication but also the meanings and social valuation of emotions themselves. In many digital ecosystems, algorithms amplify content that triggers strong reactions, reinforcing an affective culture driven by immediacy, visibility, and polarization. This emphasis encourages emotional exposure and external validation, shaping self-perception in ways that affect psychological stability and relational authenticity [
1]. Under these conditions, emotions risk becoming performative commodities rather than relational expressions rooted in shared experience.
The increasing sophistication of generative systems introduces further complexity. Some users develop emotionally meaningful bonds with AI assistants that provide a sense of understanding without judgment. These interactions may offer temporary relief but also reconfigure expectations about intimacy, introducing forms of “safe” yet asymmetrical emotional attachment that diverge from human relational norms [
15]. This dynamic echoes what Velastegui-Hernández et al. [
7] identify as simulated empathy—responses that mimic human sensitivity but lack existential authenticity and embodied resonance. Over time, these mediated intimacies may alter how individuals evaluate vulnerability, trust, and emotional reciprocity in their human relationships. Taken together, these transformations reveal that AI not only expands possibilities for emotional interaction but also redefines the cultural and relational grammar of companionship, care, and affective understanding. As boundaries between human and technological agents become increasingly blurred, critical questions emerge regarding how empathy, intimacy, and emotional well-being are being reconstructed. Understanding these dynamics is essential to evaluate how AI is reshaping not only behaviors and communication patterns but also the very texture of human sensitivity and the capacity to sustain meaningful connections in a profoundly digitalized social world.
2.3. Psychological Well-Being and the Paradoxes of Technological Companionship
The relationship between AI and psychological well-being has become one of the central debates of the past decade, not only because intelligent systems have entered clinical, educational, and organizational domains, but because they now mediate how individuals regulate emotions, relate to others, and interpret their own vulnerabilities. While these technologies offer unprecedented therapeutic opportunities, they also generate ethical, relational, and psychosocial dilemmas that require a careful and nuanced analysis [
27,
28,
29]. From a sociotechnical perspective, AI does not merely support well-being; it actively reshapes the environments, expectations, and relational structures through which well-being is constructed.
2.3.1. The Therapeutic Dimension of Digital Companionship
A growing body of research highlights the potential of AI to expand access to psychological support and enhance the effectiveness of interventions. Virtual assistants, mobile applications, and machine learning algorithms enable continuous emotional monitoring, early detection of symptoms, and tailored coping strategies [
14,
30]. During the COVID-19 pandemic, the global rise of mHealth tools revealed the capacity of AI to act as an affective bridge between clinical expertise and everyday life, providing scalable emotional companionship while facilitating large-scale surveillance of psychological states [
31]. These advances illustrate how AI becomes embedded in the affective infrastructures of daily routines, blurring distinctions between clinical care, personal self-regulation, and consumer technology.
AI-based emotional support systems have shown particular efficacy in mood regulation and patient follow-up [
21,
32]. By translating behavioral, linguistic, or physiological data into personalized feedback, these tools strengthen emotional self-efficacy and promote reflective understanding of one’s mental states [
22]. Schiff et al. [
33] argue that ethical frameworks such as IEEE 7010 have further consolidated the evaluation of technological impacts on well-being, reinforcing the notion that AI can contribute to collective mental health when designed and deployed responsibly. In educational settings, AI has also been linked to increases in creativity, motivation, and positive affect when embedded in supportive pedagogical ecosystems [
1,
34].
Despite these benefits, recent studies caution against idealizing AI as a neutral or purely supportive tool. Authors such as Alhuwaydi [
13] and Beg et al. [
28] conceptualize AI as an “emotional amplifier”—a system that can enhance human care through availability and simulated empathy, but that also introduces new forms of dependence and algorithmic mediation. In sociotechnical terms, this means that therapeutic support becomes co-produced by human intentions and machine logics, resulting in hybrid companionships that challenge traditional notions of empathy, vulnerability, and psychological safety. Furthermore, AI-mediated emotional support does not replace the human bond but transforms it. Rather than functioning as a supplementary aid, AI increasingly becomes a relational co-actor that shapes expectations about immediacy, emotional response, and the very meaning of being “understood.” This hybridization redefines psychological accompaniment within environments where human and algorithmic agents coexist, negotiate roles, and co-construct emotional norms.
2.3.2. The Paradoxes of Technological Support
Despite its promise, the digital mediation of well-being introduces structural contradictions that complicate AI’s role in psychological care. One of the most prominent tensions is the “paradox of artificial empathy,” in which users report comfort, support, and emotional resonance despite being fully aware that the system lacks genuine intentionality [
8]. This coexistence of felt proximity and ontological distance invites a deeper evaluation of authenticity in digital care. From a sociotechnical perspective, artificial empathy constitutes not merely an illusion of understanding but a reconfiguration of the norms that define what counts as emotionally meaningful in technologically saturated environments. As AI becomes increasingly embedded in psychological assistance, it risks normalizing a “soft dehumanization” of care, where well-being is assessed in terms of efficiency, immediacy, and automated responsiveness rather than relational depth [
35]. This shift subtly redefines expectations: emotional support becomes something to be delivered rather than co-constructed, thereby weakening the reciprocal foundations of human empathy.
Another paradox emerges through algorithmic fatigue—the cognitive, attentional, and emotional overload resulting from constant interaction with intelligent systems [
19,
20]. In organizational contexts, this fatigue undermines autonomy and satisfaction [
17,
18], while in educational settings it fosters anxiety, hypervigilance, or dependency on automated feedback [
7,
36]. Here, AI serves as both a facilitator and a stressor, illustrating the ambivalent emotional ecology that characterizes technology-mediated environments. Relational well-being is similarly reshaped. Although digital companionship expands opportunities for connection, it may also generate forms of “connected loneliness,” where interaction is present but its meaning is diminished [
11]. This “absent presence” reveals the fragility of intimacy when emotional exchanges are guided by algorithms rather than lived experience. As Makridis and Mishra [
16] emphasize, AI’s capacity to strengthen or erode relational well-being depends less on its technical features than on the cultural, ethical, and institutional contexts that shape its use. In this sense, AI does not simply mediate relationships—it co-produces the conditions under which relational depth becomes possible or is progressively diluted.
2.3.3. Toward an Integrative Understanding of AI-Mediated Well-Being
The impact of AI on psychological well-being cannot be reduced to a binary of benefits versus risks. Rather, it unfolds along a continuum of mediated emotional experiences in which well-being is negotiated between technological autonomy, human vulnerability, and the sociocultural frameworks surrounding digital care [
7]. This integrative lens aligns with Bankins et al. [
27], who conceptualize AI as a relational actor capable of shaping emotions, perceptions, and behavioral patterns within wider social systems [
37]. Such an approach moves beyond instrumental views of technology and positions AI within the moral and affective architecture of everyday life.
From this standpoint, technological companionship is simultaneously a cultural artifact and a psychological process. AI reshapes the meanings of care, attention, and presence by simulating recognition, responsiveness, and emotional attunement. However, these simulations coexist with human needs for authenticity, mutuality, and embodied experience. Thus, the challenge is not to reject AI-mediated companionship but to understand how it modifies the boundaries of emotional dependence and self-regulation.
Well-being in the algorithmic era requires cultivating the ability to coexist with systems that mimic empathy while preserving the centrality of human connection. This involves critical emotional literacy, ethical awareness, and organizational frameworks that ensure technology supports rather than supplants the relational foundations of mental health [
8]. In educational, clinical, and workplace settings alike, the aim should be to integrate AI in ways that empower individuals without compromising their autonomy or their capacity for genuine interpersonal engagement.
In sum, AI opens an ambivalent horizon: it democratizes psychological assistance while exposing individuals to risks of emotional superficiality, dependency, and ontological confusion. Understanding this tension demands an interdisciplinary and ethically grounded approach capable of balancing technological efficiency with humaneness, ensuring that emotional well-being remains anchored in dignity, authenticity, and relational depth.
2.4. Psychosocial and Ethical Tensions of Emotional Well-Being in the Age of Artificial Intelligence
Between 2020 and 2025, the accelerated expansion of AI reshaped how individuals understand, manage, and experience emotional well-being across clinical, educational, organizational, and everyday contexts. Although this period exposed the therapeutic potential of AI—especially during the COVID-19 pandemic—it also illuminated emerging ethical and psychosocial dilemmas. Emotional well-being, commonly conceptualized as a dynamic balance between environmental demands and personal resources, is increasingly negotiated within technological ecosystems that introduce tensions between autonomy and algorithmic influence, personalization and digital standardization, and virtual connectedness and emotional solitude. Rather than representing isolated problems, these tensions form a sociotechnical landscape in which emotional life is continually mediated and reconfigured.
During the pandemic, mHealth applications provided early evidence of AI’s ability to sustain psychological stability in moments of uncertainty. Through machine learning algorithms, these tools monitored mood states, offered guidance, and reduced isolation through responsive digital interactions [
31]. However, the same interfaces that provided companionship partially displaced human bonds, translating emotional support into algorithmic terms. Studies by Beg et al. [
28] and Olawade et al. [
32] describe the rise in hybrid models in which chatbots and cognitive assistants functioned as initial interlocutors—broadening access to care yet raising concerns about the authenticity, depth, and moral grounding of digitally mediated empathy. This shift illustrates not only technological substitution but a cultural redefinition of what “care” and “presence” mean in emotionally demanding contexts.
AI’s emotional influence extended beyond clinical environments. In organizational settings, AI-driven performance management generated ambivalent effects: improved efficiency and reduced cognitive load [
17,
18] were accompanied by diminished meaning, relational disconnection, and emotional fatigue when interactions became excessively automated [
19,
20]. These findings suggest that technological mediation can either empower or erode psychological well-being depending on how it intersects with work culture, managerial norms, and employees’ emotional expectations. Likewise, the tension between support and depersonalization became increasingly evident as AI blurred the boundary between assistance and control.
In educational contexts, AI-based applications influenced learners’ emotional and motivational processes. Research by Dai et al. [
34] and Lin and Chen [
1] highlights positive effects on creativity and emotional self-efficacy when algorithms operate transparently and promote authentic engagement. However, other studies caution that overreliance on AI-mediated feedback may weaken emotional self-regulation and foster a superficial engagement with learning [
7,
36]. This ambivalence reflects a broader tension: AI can scaffold emotional development, yet it may simultaneously displace the reflective and relational experiences necessary for genuine emotional growth.
At a structural and societal level, AI functions as an invisible actor shaping social well-being. Makridis and Mishra [
16] argue that the rise of “artificial intelligence as a service” generates economic gains without necessarily improving subjective well-being, as it tends to reproduce existing digital inequalities. Moghayedi et al. [
11] demonstrate that in industries such as facilities management in South Africa, AI adoption produces both opportunities for inclusion and new forms of precarity. These findings underscore that emotional well-being is not simply an individual psychological state but a socially mediated phenomenon embedded in power dynamics, technological access, and digital divides [
38]. Thus, well-being in the age of AI requires acknowledging that emotional experiences are entangled with broader socio-economic and technological structures.
Ethical concerns constitute another major source of tension. Algorithmic assessments of well-being raise issues related to emotional privacy, data transparency, and the right to disconnect from affective surveillance. IEEE 7010 [
33] represented a step forward by incorporating principles of fairness and autonomy into AI evaluation. Yet, technological evolution has advanced more rapidly than regulatory frameworks, creating what Jeyaraman et al. [
35] call an “ethical enigma”: a widening gap between technical innovation and moral adaptation. This gap increases the risk of emotional manipulation, misuse of sensitive affective data, and confusion between automated care and human relationality. In this sense, the ethical challenge is not merely regulatory but epistemic: societies must reconsider how they define emotional authenticity, responsibility, and consent in technologically mediated environments.
Overall, the literature demonstrates that emotional well-being in the age of AI is relationally and ethically complex. Intelligent systems enhance individuals’ capacity to recognize and regulate emotions [
21,
22], yet they also foster dependencies that externalize the pursuit of emotional validation. This duality reveals that well-being is increasingly co-created between humans and machines—a hybrid emotional ecology in which agency, authenticity, and affective depth are continually renegotiated.
In summary, between 2020 and 2025, AI acted simultaneously as a catalyst and a disruptor of global emotional well-being. Advances in digital mental health, emotional education, and workplace well-being expanded the reach of psychological support, yet also exposed risks related to emotional superficiality, dependency, diminished empathy, and unequal access to digital resources. The central challenge moving forward is not technological sophistication but the collective ability to integrate AI into an ethical and humanistic framework that preserves the authenticity of emotional experience and protects the conditions under which relational well-being can flourish.
2.5. Conceptual Model of Artificial Intelligence (AI)-Mediated Emotional Well-Being
The proposed conceptual model synthesizes the core relationships identified in the reviewed literature between artificial intelligence (AI) and emotional well-being during the 2020–2025 period. Rather than treating AI as an isolated technological artifact, the model conceptualizes it as a multilevel sociotechnical system that operates simultaneously as a technological mediator, relational actor, and emotional modulator. These functions unfold across three interdependent levels—technological–structural, psychosocial–relational, and ethical–existential—whose interactions shape the ambivalent emotional ecology of contemporary digital life.
Figure 2, included below, visually represents these three levels and illustrates the bidirectional flow through which technological infrastructures, relational processes, and ethical–existential dynamics continuously influence one another.
2.5.1. Technological–Structural Level
The technological–structural level encompasses algorithmic systems, chatbots, mHealth platforms, and intelligent digital environments that mediate emotional experience. These systems function as “affective infrastructures” that expand access to psychological support, facilitate emotional self-regulation, and create new spaces for digital companionship [
1,
22,
31].
At this level, technology sets the conditions of emotional possibility: how often users interact, what types of emotional data are collected, and how feedback loops reinforce patterns of dependency, autonomy, or vulnerability. This layer thus determines the structural capacities—and limitations—through which emotional well-being can be mediated by AI.
2.5.2. Psychosocial–Relational Level
The psychosocial–relational level describes the interaction processes between users and AI systems, where the main paradoxes of digital well-being emerge. Central phenomena such as simulated empathy, connected loneliness, and absent presence reveal how AI redefines the meanings of intimacy, trust, and emotional authenticity [
7,
8].
In this layer, technology becomes a relational co-actor, shaping expectations about immediacy, responsiveness, and emotional resonance. Interactions with AI may temporarily alleviate loneliness or stress, yet they can also blur interpersonal boundaries, diminish embodied presence, and weaken the capacity for meaningful relational engagement. This level captures the fluid and often contradictory emotional experiences that arise when humans and algorithms co-construct affective environments.
2.5.3. Ethical–Existential Level
The ethical–existential level integrates the dilemmas that emerge from the algorithmic mediation of emotional well-being, including autonomy, emotional privacy, and the preservation of humanity in contexts of digital care. This level reflects the tension between efforts to embed ethical design principles into AI [
33] and the growing concerns about the dehumanization of psychological support and emotional surveillance [
19,
35]. Here, emotional well-being is framed as a moral and existential question:
1. How much control should be delegated to machines?
2. What forms of emotional data should remain private?
3. What does it mean to be cared for in a digital society?
This dimension reveals the societal consequences of integrating AI into affective life and highlights the normative choices needed to sustain humane emotional ecosystems.
2.5.4. Implications of the Model
Taken together, these three levels form a circular and bidirectional system in which human experiences feed algorithms, and algorithms reshape perceptions, affective patterns, and social relationships. AI-mediated emotional well-being emerges not as a linear cause–effect relation but as a dynamic co-production, where benefits (access, personalization, constant support) coexist with latent risks (affective dependence, algorithmic fatigue, erosion of empathy). The model proposes that sustainable digital emotional well-being can only be achieved when three principles are balanced:
Technological humanization, which situates empathy and ethical intentionality at the core of algorithmic design.
Emotional autonomy, which promotes self-regulation and minimizes technological overdependence.
Transparency and fairness, which protect emotional privacy and ensure the responsible use of affective data.
This conceptual framework encourages viewing AI not as a solution or a threat but as a hybrid emotional ecosystem in which both human and algorithmic agents evolve in parallel. It allows recent literature to be interpreted as an interdisciplinary dialogue that reveals the constitutive tensions of digital emotional life.
By highlighting AI’s triple function—as infrastructure, relational agent, and emotional modulator—the model explains why the effects of AI on well-being are so diverse and often contradictory.
This theoretical foundation lays the groundwork for
Section 3, which examines how these dynamics materialize across three domains—health, work, and education—and how each reflects the central paradox of the algorithmic age: the unstable balance between technological connection and emotional disconnection.
4. Theoretical Implications
The psychosocial analysis of artificial intelligence (AI) between 2020 and 2025 makes it possible to rethink the relationship between technology, emotions, and human well-being from an interdisciplinary standpoint. The findings show that approaches centered exclusively on technological efficiency or instrumental benefits are inadequate for understanding the phenomenon. Instead, it is necessary to advance toward theoretical models that integrate the emotional, relational, and ethical dimensions of human–machine interaction. Within this perspective, AI should be understood not merely as a functional tool but as a relational agent, one that actively participates in shaping social bonds, perceptions of support, and the construction of the digital self [
1,
7]. This interpretation aligns with the multi-level structure proposed in
Figure 2, emphasizing that emotional experience in the digital era emerges from the interplay between technological infrastructures, psychosocial dynamics, and ethical meaning-making.
One of the central theoretical contributions of this study lies in putting forward an integrative perspective on AI-mediated emotional well-being, articulating three interdependent dimensions: psychological, social, and technological. The psychological dimension concerns the internal processes of self-regulation, emotional reflection, and the redefinition of self-perception generated through constant exposure to automated systems. The social dimension refers to the reconfiguration of interpersonal bonds, collective practices, and emerging forms of digital community. The technological dimension encompasses the algorithmic mechanisms that interpret, classify, and modulate affective states. This multi-layered framework acknowledges that contemporary emotional experience is not produced in isolation but instead unfolds in hybrid environments where human emotionality is partially coded, quantified, and reinterpreted by AI [
8,
15]. By synthesizing these three components, the study provides a conceptual foundation for future comparative research on the psychological impacts of AI and the relational sustainability of increasingly digitalized environments.
The study also contributes to the debate surrounding the “digital affective paradox,” defined as the persistent tension between emotional connection and emotional disconnection in hyperconnected societies. AI technologies amplify this paradox: while they expand companionship, guidance, and perceived support, they can simultaneously foster emotional dependence on non-human agents. Such ambivalence challenges classical notions of autonomy, presence, and authenticity, requiring renewed attention to concepts such as digital empathy—the capacity of technological interfaces to simulate emotional understanding—and authentic emotional well-being, understood as the alignment between subjective emotional experience and technological mediation [
11,
14]. From a theoretical standpoint, this tension underscores the need to examine AI not only as a cognitive or instrumental enhancer but also as a co-producer of emotional meaning.
Additionally, the study proposes expanding the conceptual vocabulary of well-being psychology through the notion of algorithmic emotional well-being, defined as the way algorithms filter, quantify, and modulate human emotions. This concept does not promote a deterministic or pessimistic view; rather, it invites reflection on how personalization mechanisms—embedded in self-care applications, mHealth platforms, and emotional support systems—are reshaping emotional balance and decision-making processes. During 2020–2025, the widespread adoption of these tools illustrated that technological mediation can empower self-care practices but may also reduce emotional autonomy depending on users’ levels of digital literacy and critical awareness [
31]. Theoretically, this suggests that emotional well-being can no longer be analyzed without acknowledging the algorithmic architectures that sustain it.
From a sociotechnical standpoint, this work invites scholars to consider AI as a new social and affective actor, one that not only reflects human emotions but also shapes them by anticipating affective states, recommending behaviors, and generating simulated empathetic responses. Understanding AI therefore entails recognizing its agency within a shared emotional ecology where humans and intelligent systems co-construct meaning, expectations, and emotional norms.
Finally, this theoretical reflection reinforces the need for an interdisciplinary framework that brings together social psychology, technological ethics, and digital communication studies. Such integration is essential for explaining emerging phenomena such as emotional depersonalization, affective hyperconnectivity, and the delegation of emotional decisions to automated systems [
42]. Collectively, these contributions help address the knowledge gap identified in this article by offering a robust conceptual foundation for future research on how AI reshapes emotional experience and the dynamics of well-being in digital societies.
5. Practical Implications
The practical implications derived from this critical review highlight that emotional well-being in the age of artificial intelligence (AI) depends not only on technological progress but, fundamentally, on the ethical, relational, and institutional choices guiding its design and use. Consistent with the three-level model proposed in
Figure 2—technological–structural, psychosocial–relational, and ethical–existential—the findings reveal that AI-mediated emotional well-being requires coordinated actions from individuals, organizations, public institutions, and digital communities. Each level interacts with the others and generates both opportunities for support and risks of dependency, depersonalization, or emotional distortion, as shown across the studies analyzed [
5,
11,
43].
5.1. Implications at the Individual Level: Emotional Self-Regulation and Digital Agency
The reviewed evidence shows that individuals increasingly rely on AI systems—such as mHealth apps, chatbots, and virtual assistants—to manage stress, loneliness, or emotional dysregulation [
13,
43,
44]. These tools can expand emotional resources but also facilitate dependence, reduced autonomy, or a displacement of introspection toward automated scripts.
Accordingly, individuals need enhanced emotional and critical digital literacy to distinguish when AI contributes to well-being and when it undermines self-regulation [
45,
46]. Well-being programs, both personal and institutional, should incorporate:
Training in recognizing synthetic empathy and its limits [
5,
28].
Boundaries for digital companionship, preventing excessive reliance on simulated emotional support.
Strategies for conscious digital disconnection, especially for users of personalized or always-on systems.
This aligns with the psychosocial–relational dimension of
Figure 2, which emphasizes the need to preserve autonomy and authentic emotional experience.
5.2. Implications for Organizations: Human-Centered Digital Ecosystems
Across organizational studies, AI adoption improves efficiency but can simultaneously erode professional identity, weaken belonging, or induce digital fatigue [
7,
18,
19,
40]. The findings show that employee well-being emerges from the interplay between technological configurations and relational climates.
Thus, organizations must develop human-centered digital ecosystems, where AI enhances—not replaces—empathy, trust, and meaningful interaction. This includes:
Training leaders in conscious digital leadership, integrating emotional competencies with ethical evaluation of AI use [
47].
Designing workflows that avoid emotional over-automation, ensuring space for human deliberation and interpersonal connection.
Monitoring for algorithmic pressure, perceptions of surveillance, and relational erosion, which are risks repeatedly identified in the 2020–2025 evidence.
Organizations should apply the technological–structural level of the conceptual model, ensuring that systems are implemented with relational sensitivity and ethical safeguards [
48].
5.3. Implications for Public Institutions and Policymakers: Emotional Equity and Digital Rights
Empirical findings suggest that AI benefits are unevenly distributed and may reproduce or amplify emotional inequalities, particularly in settings with limited digital literacy or weak regulatory frameworks [
11,
16]. In response, public policy must incorporate emotional well-being as a core criterion in AI governance.
Priority actions include:
Algorithmic transparency requirements that reveal how emotional data are processed.
Affective data protection policies that safeguard users from unintended emotional exposure [
8,
35].
Regulations governing therapeutic, educational, and service-oriented AI, ensuring that systems do not replace human support where relational sensitivity is essential.
Digital inclusion strategies that reduce cultural and technological gaps [
1].
This corresponds to the ethical–existential level of the model, ensuring dignity, safety, and affective equity.
5.4. Implications for Healthcare and Education Systems: Responsible Emotional Mediation
Evidence from mental health and education demonstrates that AI can support diagnosis, monitoring, and emotional accompaniment [
3,
30,
31,
36]. However, these systems can also generate depersonalization, emotional outsourcing, or diminished spontaneity [
4,
7].
Therefore, institutions should:
Integrate impact assessment protocols that evaluate AI-mediated emotional risks, such as dependency, reduced introspection, or decreased belonging.
Use AI tools as adjuncts—not replacements—for human professionals, reinforcing hybrid models of care [
12].
Ensure that adaptive educational systems incorporate emotion-sensitive modules that support creativity, intrinsic motivation, and human connection [
1,
34].
This reinforces the need for relational designs that maintain the centrality of human sensitivity.
5.5. Implications at the Community and Societal Level: Toward Emotionally Sustainable Digital Cultures
The studies reviewed highlight the emergence of new digital communities and affective networks shaped by AI-mediated interactions. These environments can foster belonging but also create misinformation, comparison pressures, or emotional fragmentation [
9,
15].
To build emotionally sustainable technological cultures, communities should promote:
Collective emotional literacy, enabling shared reflection on technology’s role in shaping feelings and relationships.
Community norms that prioritize empathy, cooperation, and ethical digital coexistence.
Practices that recognize both the opportunities and vulnerabilities inherent in AI-mediated social life.
Ultimately, societal well-being requires developing a mature emotional culture capable of integrating AI without sacrificing human meaning, dignity, and relational depth [
49].
5.6. Closing Synthesis
Together, these practical implications demonstrate that the challenges and opportunities of AI-mediated emotional well-being are best addressed through integrated, multilevel, and ethically grounded strategies, consistent with the conceptual model developed in this study. Emotional well-being in AI-driven societies depends on coordinated action across all the levels addressed above: individuals, organizations, public institutions, healthcare and education systems, and digital communities.
Only through such coordinated efforts can AI contribute to healthier, fairer, and more human-centered emotional ecosystems.
6. Limitations and Future Research Lines
This study is framed within the historical period from 2020 to 2025, a phase characterized by unprecedented acceleration in the development and adoption of artificial intelligence (AI) technologies. Although this time frame is suitable to capture the most recent transformations in the relationship between AI and emotional well-being, it also entails an inherent limitation: the speed of technological change exceeds the academic capacity to construct stable theoretical frameworks. For this reason, the findings should be interpreted as an analytical snapshot of a rapidly evolving phenomenon rather than as a definitive synthesis.
A second limitation derives from the theoretical and reflective nature of the adopted approach. This article does not present direct empirical data; instead, it offers a critical review supported by interdisciplinary scientific literature. While this strategy allows for identifying emerging trends, psychosocial tensions, and ethical dilemmas, it limits the ability to quantify the effects of AI on specific variables such as anxiety, empathy, or subjective well-being. Consequently, future studies should incorporate mixed methodologies—quantitative, qualitative, and experimental—to triangulate empirical and conceptual evidence and deepen our understanding of how algorithmic mediation influences emotional health.
Contextual heterogeneity represents another significant limitation. The use, acceptance, and impact of AI tools—such as mHealth systems, virtual assistants, or therapeutic chatbots—vary widely across cultural, socioeconomic, and technological settings. These differences shape experiences of digital well-being and restrict the generalizability of results. Therefore, it is essential to advance toward comparative and cross-cultural research that examines how educational, social, and economic factors modulate the relationship between AI and emotional well-being. Particular attention should be given to perspectives from the Global South, where infrastructure gaps and regulatory limitations create emotionally vulnerable scenarios that remain underexplored [
50,
51,
52].
In addition, most of the reviewed studies rely on short-term observations or controlled environments. This lack of longitudinal evidence limits the understanding of how sustained interaction with AI reshapes attachment patterns, emotional autonomy, or perceptions of social support over time. Future research should therefore examine the long-term evolution of these dynamics to distinguish between healthy emotional adaptation and affective dependency on intelligent systems.
The findings of this study—summarized in the three-level conceptual model (technological–structural, psychosocial–relational, and ethical–existential)—suggest several specific avenues for future research:
Mechanisms of Algorithmic Affectivity: Future studies should investigate how AI systems detect, categorize, and modulate emotions, and how these mechanisms influence the psychosocial–relational level identified in this review. Understanding these processes is essential to prevent automated emotional manipulation and to refine ethical principles of fairness and transparency.
Digital Emotional Identity and the Construction of the Algorithmic Self: Results show that emotional expression in digital environments is increasingly shaped by algorithmic infrastructures. Research should therefore examine how individuals build, perform, and negotiate emotions in mediated contexts and how these processes affect authenticity, trust, and identity formation over time.
AI-Assisted Emotional Regulation and the Autonomy–Dependence Continuum: Given the ambivalence observed in the literature—AI can either enhance self-regulation or foster dependence—future research should specify under which conditions AI-assisted regulation supports emotional autonomy and when it risks substituting or diminishing introspective capacities.
Longitudinal Impacts of Hybrid Emotional Ecosystems: The conceptual model emphasizes the circular dynamic through which humans feed algorithms and algorithms shape human emotional experience. Longitudinal research is needed to understand how this loop evolves, particularly in domains highlighted in the findings: mental health, organizational well-being, and education.
Cultural and Structural Moderators in AI-mediated Well-being: Considering the contextual variability documented in the results, cross-cultural studies should identify structural moderators—such as technological literacy, socioeconomic inequality, and cultural norms—that condition whether AI operates as a facilitator or disruptor of emotional well-being.
Ethical Architecture of Emotional Data: As several studies highlighted persistent dilemmas regarding affective privacy, algorithmic opacity, and emotional dignity, future research should develop frameworks capable of evaluating how emotional data are collected, interpreted, and used. This includes exploring regulatory strategies inspired by standards such as IEEE 7010 [
33].
Relational Dynamics in Human–AI Emotional Interaction: The review shows tensions between simulated empathy, connected loneliness, and absent presence. Future studies should analyze these relational paradoxes in depth, examining how empathy, trust, and perceived support are redistributed within hybrid environments where human and non-human actors coexist.
Altogether, these lines of research advance a more specific, contextualized, and conceptually grounded agenda for future studies on emotional well-being in the age of artificial intelligence. The central challenge will be to construct integrative models that capture not only the technological effects of AI but also the cultural, moral, and affective dimensions that shape human experience in increasingly algorithmic societies.
7. Conclusions
The analysis of literature published between 2020 and 2025 shows that artificial intelligence (AI) has become a central mediator of contemporary emotional life, not simply by expanding technological capabilities but by reshaping how individuals perceive, regulate, and express their emotions across clinical, organizational, and educational contexts. The findings confirm that AI is no longer a peripheral tool but an emergent relational and affective agent—one that participates in the co-construction of well-being, vulnerability, and emotional meaning in digital environments.
The main contribution of this study lies in proposing a holistic and theoretically integrated understanding of AI-mediated emotional well-being, articulated through the three-level conceptual model (technological–structural, psychosocial–relational, and ethical–existential). This model highlights that emotional well-being is neither a purely individual state nor a purely technological outcome; instead, it emerges from the dynamic, circular, and bidirectional interactions between human experiences and algorithmic systems. The reviewed evidence shows that AI shapes emotional experience by organizing infrastructures of support, mediating relational dynamics, and generating new ethical dilemmas around autonomy, authenticity, and affective privacy.
Across domains, the findings reveal that AI functions as an ambivalent catalyst. It enables greater accessibility to psychological resources [
30,
31], fosters creativity and adaptive learning [
1,
34], and supports emotional regulation and monitoring [
12]. Yet, it also introduces vulnerabilities: affective dependence [
6], depersonalization in work and care contexts [
5,
19], algorithmic fatigue, and forms of emotional outsourcing that may erode spontaneity or reflective awareness [
4,
7]. This duality underscores that digital well-being depends not only on system performance but on the ethical and relational design choices embedded in AI.
A central insight of this reflection is that fragmented approaches to AI and well-being are insufficient. Clinical, organizational, and educational studies tend to conceptualize emotional well-being in isolation, often focusing either on functional benefits or on risks of dehumanization. By integrating these perspectives into a unified psychosocial and ethical framework, this article demonstrates that emotional well-being in the algorithmic age is a hybrid construct, shaped by technological infrastructures, relational interactions, and moral expectations. This integration responds directly to the knowledge gap identified in the Introduction section and offers a conceptual foundation for understanding the paradoxes that characterize AI-mediated emotional life: increased support alongside increased dependency, enhanced personalization alongside diminished authenticity.
The findings also reinforce the importance of interdisciplinary dialogue, connecting psychology, AI ethics, human–computer interaction, sociology, and organizational behavior. Understanding AI as part of an emerging emotional ecosystem requires approaches capable of examining not only emotional outcomes but also the cultural, relational, and moral conditions that make such outcomes possible. The role of AI in emotional life cannot be fully understood through technological determinism or moral alarmism; instead, it demands a critical, contextual, and relational perspective.
Ultimately, this article argues that the key question for the future is not whether AI improves or harms emotional well-being, but under what conditions it contributes to autonomy, authenticity, and psychological flourishing. Sustainable digital well-being requires designing technologies with human intentionality, fostering ethical and emotionally conscious leadership, reducing structural inequities in access and literacy, and strengthening individuals’ capacity for reflective, autonomous engagement with intelligent systems.
In conclusion, preserving the human dimension in the algorithmic age does not entail rejecting AI but learning to coexist with it responsibly—ensuring that technological innovation serves emotional life rather than shaping it uncritically. AI can become a vehicle for emotional development, support, and resilience, but this potential will only be realized if societies cultivate the ethical, cultural, and relational maturity necessary to guide its evolution. The challenge is collective: to design, regulate, and inhabit AI-mediated environments in ways that protect emotional dignity and enhance, rather than diminish, what fundamentally defines us as human beings—the capacity to feel, understand, and care for others.