Review

Artificial Intelligence and the Reconfiguration of Emotional Well-Being (2020–2025): A Critical Reflection

by Carlos Santiago-Torner *, José-Antonio Corral-Marfil and Elisenda Tarrats-Pons
Department of Economics and Business, Faculty of Business and Communication Studies, University of Vic—Central University of Catalonia, 08500 Vic, Spain
* Author to whom correspondence should be addressed.
Societies 2026, 16(1), 6; https://doi.org/10.3390/soc16010006
Submission received: 28 October 2025 / Revised: 15 December 2025 / Accepted: 18 December 2025 / Published: 23 December 2025

Abstract

Between 2020 and 2025, rapid advances in artificial intelligence (AI) reshaped how individuals access emotional support, express feelings, and build interpersonal trust. This article offers a critical reflection—based on an analytical review of 40 peer-reviewed studies—on the psychosocial, ethical, and sociotechnical tensions that characterize AI-mediated emotional well-being. We document both opportunities (expanded access to support, personalization, and early detection) and risks (simulated empathy, affective dependence, algorithmic fatigue, and erosion of relational authenticity). Methodologically, we applied a three-phase critical review: exploratory reading, thematic clustering, and interpretive synthesis; sources were retrieved from Scopus, Web of Science and PsycINFO and filtered by relevance, methodological rigor, and topical fit. We propose a conceptual model integrating three interdependent levels—technological–structural, psychosocial–relational, and ethical–existential—and argue for a sociotechnical perspective that recognizes AI as a co-constitutive actor in emotional ecologies. The article closes with targeted research agendas and policy recommendations to foster human-centered AI that preserves emotional autonomy and equity.

1. Introduction

Between 2020 and 2025, the accelerated expansion of artificial intelligence (AI) profoundly reshaped the ways people work, learn, communicate, and manage their emotional well-being. Advances in machine learning, natural language processing, and generative systems have made AI an integral part of everyday life [1,2]. This integration has redefined human interaction patterns and the very notion of psychological balance within a society increasingly mediated by algorithms [3,4]. However, alongside promises of efficiency, personalization, and emotional assistance, new ethical and psychosocial questions have emerged regarding AI’s effects on identity, autonomy, and the quality of affective bonds [5,6,7]. Although the temporal frame “2020–2025” might appear arbitrary at first sight, it corresponds to a period marked by (a) the global digital acceleration triggered by COVID-19, (b) the mass adoption of conversational agents and generative AI, and (c) the consolidation of empirical studies that allow for a coherent synthesis of psychosocial trends. This interval therefore provides a conceptually meaningful window to examine how AI-mediated emotional experiences evolved during a phase of unprecedented technological uptake.
From a psychosocial perspective, the advent of AI has altered the dynamics of human relationships, shaping both perceptions of social support and experiences of loneliness. The so-called “emotional hyperconnectivity” has partly replaced face-to-face encounters with digital interactions, in which emotions are interpreted, classified, or even generated by automated systems [8,9]. While this transformation has expanded access to support resources and reduced geographical and economic barriers [10,11,12], it has also raised concerns about depersonalization and the erosion of emotional authenticity [5,13,14].
The emergence of generative AI tools has intensified the tension between autonomy and technological dependence. Interactive AI systems and self-care applications offer immediacy, privacy, and assistance—qualities particularly appealing to those seeking emotional support [1,3]. Yet these same features can foster excessive reliance on technology for emotional decision-making, encouraging dependency on devices and gradually replacing human contact with digital bonds [6,15]. This phenomenon illustrates what has been termed the “digital affective paradox”: as technological connectivity increases, people do not necessarily feel more connected. In fact, the more they are accompanied by digital systems, the harder it becomes to sustain genuine relationships and a stable sense of belonging [7,9].
Although recent literature on AI and well-being has grown considerably, most studies focus on clinical or functional outcomes—such as stress reduction, learning enhancement, or work efficiency [16,17,18]. Few, however, examine the relational, identity-related, and ethical implications of coexisting with intelligent systems [1,2]. This reveals a more precise knowledge gap: current research lacks integrative frameworks capable of explaining the constitutive tensions that arise when humans and AI systems co-shape affective experiences. Existing studies describe benefits and risks, but they rarely articulate how technological infrastructures, psychosocial processes, and ethical dilemmas interact dynamically. In particular, socio-technical and socio-material perspectives—central for understanding human–technology co-constitution—remain underutilized in the emotional well-being literature.
Moreover, existing studies tend to approach the relationship between AI and well-being from isolated disciplinary domains—clinical, educational, or organizational—without exploring their intersections [3,4]. While mental health research highlights AI’s therapeutic potential and its ability to broaden access to care [5,12], organizational studies warn of digital stress and loss of meaning at work [19,20]. This lack of integration constrains a comprehensive understanding of AI’s impact on human well-being and hinders the development of ethical policies that balance innovation with humanization [2,21]. Thus, the gap is not merely empirical but conceptual: we lack a model that situates emotional well-being within a multilayered system where AI operates simultaneously as a technological infrastructure, a relational mediator, and an ethical actor.
It is therefore essential to promote a critical reflection that transcends technological enthusiasm and focuses on the relational nature of emotional well-being [4,22]. Understanding AI’s impact from this lens entails recognizing that well-being depends not merely on access to digital tools but on the quality of human experiences those tools facilitate or transform [9]. In this sense, AI should be understood not as a functional mechanism but as a relational actor actively shaping emotions, values, and contemporary social interactions [5,6].
The pandemic and post-pandemic contexts further accelerated this emotional-technological integration [16]. During the COVID-19 pandemic, the massive use of digital platforms and virtual psychological support assistants demonstrated both the therapeutic potential of technology and its ethical limitations [12,23]. The experiences accumulated between 2020 and 2025 reveal a significant evolution in how individuals experience intimacy, vulnerability, and emotional companionship in AI-mediated environments [7,9]. This period has therefore become an analytically coherent corpus from which to examine emerging psychosocial patterns, rather than a merely chronological delimitation. Yet, an integrative understanding capable of articulating these insights from an interdisciplinary and ethical standpoint remains scarce [2,21].
Against this backdrop, the present article offers a critical reflection on the psychosocial impacts of artificial intelligence on emotional well-being, drawing upon studies published between 2020 and 2025 in journals ranked in Q1–Q3 of Journal Citation Reports (JCR) and Scimago Journal Rank (SJR). Its aim is to analyze the tensions, paradoxes, and ethical dilemmas arising from the intensive use of intelligent technologies, acknowledging both their benefits and the risks they pose to collective psychological health [5,22].
Furthermore, this work seeks to bridge the gap between clinical and technological approaches through a relational and contextual perspective that situates emotional well-being at the core of the debate on AI [2,4]. It assumes that well-being is a hybrid construct shaped by the continuous interaction between human and artificial intelligence, in which technology can either mitigate or intensify distress [9,21]. Finally, the study invites an interdisciplinary dialogue aimed at fostering a more conscious, ethical, and emotionally sustainable relationship with AI, recognizing that human well-being largely depends on the quality of both human and digital relationships shaping contemporary emotional life [1,6].

Methodology

This study adopts a critical reflection approach, grounded in an analytical review of scientific literature published between 2020 and 2025 on the psychosocial impact of artificial intelligence (AI) on emotional well-being. This approach is particularly suitable when the purpose is not to measure or empirically verify a phenomenon but to interpret, contextualize, and reconstruct it theoretically. As Booth et al. [24] and Torraco [25] argue, critical reviews aim to integrate findings from diverse disciplines and perspectives to achieve a deeper understanding of complex and emerging phenomena.
The choice of this design responds to the dynamic nature of AI, whose social and psychological effects evolve faster than empirical validation can capture. Consequently, this approach moves beyond the mere description of results to focus on meaning-making processes and conceptual tensions arising from the relationship between technology and emotional well-being. According to Snyder [26], critical reviews contribute to identifying knowledge gaps, examining implicit assumptions, and proposing new interpretive frameworks for future research.
Unlike systematic review models or protocols such as PRISMA—which seek comprehensiveness and replicability—the critical reflection approach prioritizes conceptual depth, argumentative coherence, and the capacity for synthesis as indicators of academic rigor. Rather than generating closed conclusions, this type of review opens questions and expands interpretive frameworks from an interdisciplinary standpoint.
The corpus analyzed includes 40 scientific publications selected for their theoretical relevance, quality, and currency. Sources were extracted from Scopus, Web of Science, and PsycINFO, restricting the search to journals indexed in Q1 to Q3 of the SJR or JCR systems. Inclusion criteria considered: (a) thematic relevance—i.e., the relationship between AI and emotional well-being or mental health; (b) temporal relevance (2020–2025); and (c) scientific solidity—ensured through peer review and publication in recognized academic outlets.
To ensure methodological transparency, the search strategy incorporated explicit Boolean strings. The core search formula was: (“artificial intelligence” OR “AI” OR “machine learning” OR “chatbot” OR “conversational agent” OR “generative AI”) AND (“emotional well-being” OR “emotional wellbeing” OR “mental health” OR “emotional support” OR “affective”). Filters included: publication years 2020–2025; language = English; document type = articles or reviews; inclusion only of journals indexed in JCR or SJR (Q1–Q3). The searches were conducted in Scopus, Web of Science, and PsycINFO between February and March 2025. These parameters guaranteed conceptual coherence and methodological robustness.
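For illustration only, the following sketch shows how the Boolean string and filter parameters reported above could be assembled programmatically before being entered into the database interfaces. The term lists reproduce those given in the text, while the helper function and the filter dictionary are hypothetical conveniences and not part of the original search protocol.

```python
# Illustrative sketch (not part of the original protocol): assembling the
# Boolean search string and the filter parameters reported in the Methodology.

AI_TERMS = [
    "artificial intelligence", "AI", "machine learning",
    "chatbot", "conversational agent", "generative AI",
]
WELLBEING_TERMS = [
    "emotional well-being", "emotional wellbeing", "mental health",
    "emotional support", "affective",
]

def boolean_block(terms):
    """Join quoted terms with OR and wrap the result in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Core search formula: both conceptual blocks must co-occur.
query = f"{boolean_block(AI_TERMS)} AND {boolean_block(WELLBEING_TERMS)}"

# Filters applied in Scopus, Web of Science, and PsycINFO (as reported).
filters = {
    "years": (2020, 2025),
    "language": "English",
    "document_type": ["article", "review"],
    "indexation": "JCR or SJR, Q1-Q3",
}

if __name__ == "__main__":
    print(query)
    for key, value in filters.items():
        print(f"{key}: {value}")
```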
The selection process followed a structured sequence summarized in Figure 1 (Flow Diagram). After removing duplicates, titles and abstracts were screened for topical alignment; full texts were then evaluated based on inclusion and exclusion criteria. Studies were excluded if they (a) lacked direct relevance to emotional well-being; (b) examined AI only from technical or engineering perspectives; (c) fell outside the 2020–2025 time frame; or (d) did not meet peer-review and indexation criteria. The final sample comprised 40 articles that met all eligibility criteria.
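As a minimal sketch of this screening sequence, and assuming that bibliographic records were exported with basic metadata, the snippet below removes duplicates and applies the exclusion criteria (a)–(d). The field names and example records are hypothetical: the actual screening was performed manually against the criteria listed above.

```python
# Hypothetical screening sketch: duplicates removed by DOI, then records
# checked against the exclusion criteria reported in the Methodology.

records = [
    {"doi": "10.1000/ex1", "year": 2022, "relevant_to_wellbeing": True,
     "technical_only": False, "peer_reviewed_q1_q3": True},
    {"doi": "10.1000/ex1", "year": 2022, "relevant_to_wellbeing": True,
     "technical_only": False, "peer_reviewed_q1_q3": True},   # duplicate
    {"doi": "10.1000/ex2", "year": 2019, "relevant_to_wellbeing": True,
     "technical_only": False, "peer_reviewed_q1_q3": True},   # outside 2020-2025
]

def deduplicate(items):
    """Keep the first occurrence of each DOI."""
    seen, unique = set(), []
    for item in items:
        if item["doi"] not in seen:
            seen.add(item["doi"])
            unique.append(item)
    return unique

def eligible(item):
    """Apply exclusion criteria (a)-(d) described in the text."""
    return (
        item["relevant_to_wellbeing"]          # (a) direct relevance
        and not item["technical_only"]         # (b) not purely technical
        and 2020 <= item["year"] <= 2025       # (c) time frame
        and item["peer_reviewed_q1_q3"]        # (d) peer review and indexation
    )

if __name__ == "__main__":
    screened = [r for r in deduplicate(records) if eligible(r)]
    print(len(screened), "records retained")
```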
The review process unfolded in three phases: an exploratory reading to identify preliminary categories, a thematic clustering of findings around positive impacts, ethical dilemmas, and emerging challenges, and an interpretive synthesis that connected empirical evidence with conceptual frameworks.
During the exploratory phase, preliminary categories were defined inductively. Two independent readings of approximately 15% of the corpus were used to generate initial codes such as “AI-mediated emotional support,” “erosion of authenticity,” “algorithmic dependence,” and “relational displacement.” These preliminary categories provided the foundation for the subsequent thematic clustering. The clustering phase grouped studies based on converging patterns at cognitive, affective, relational, and ethical levels. This process made it possible to identify latent tensions—particularly the contradiction between expanded technological accessibility and diminished emotional authenticity—consistently highlighted in the recent literature.
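Purely as a schematic illustration, the snippet below mirrors the clustering logic described above: studies tagged with the preliminary codes are grouped under broader thematic levels (cognitive, affective, relational, ethical). The code-to-cluster mapping and the example records are hypothetical; the actual clustering was interpretive rather than automated.

```python
from collections import defaultdict

# Hypothetical mapping from preliminary inductive codes to thematic levels.
CODE_TO_CLUSTER = {
    "AI-mediated emotional support": "affective",
    "erosion of authenticity": "ethical",
    "algorithmic dependence": "cognitive",
    "relational displacement": "relational",
}

# Illustrative records: each study carries the codes assigned during the
# exploratory reading phase (identifiers are placeholders, not real sources).
studies = [
    {"id": "S01", "codes": ["AI-mediated emotional support", "algorithmic dependence"]},
    {"id": "S02", "codes": ["erosion of authenticity", "relational displacement"]},
    {"id": "S03", "codes": ["relational displacement"]},
]

def cluster_studies(records, mapping):
    """Group study identifiers by the thematic level of each assigned code."""
    clusters = defaultdict(set)
    for record in records:
        for code in record["codes"]:
            clusters[mapping[code]].add(record["id"])
    return {theme: sorted(ids) for theme, ids in clusters.items()}

if __name__ == "__main__":
    print(cluster_studies(studies, CODE_TO_CLUSTER))
```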
Finally, the interpretive synthesis involved triangulating findings with psychosocial, ethical, and sociotechnical perspectives. This stage sought not only to summarize evidence but to articulate the constitutive tensions that emerge when humans and AI systems co-shape emotional experiences. Interpretive synthesis was supported by constant comparison techniques, allowing the contrasting of conceptual approaches and the identification of emerging theoretical gaps.
Although it does not conform to formal protocols like PRISMA, the study maintained standards of rigor, transparency, and internal coherence to ensure scientific validity. Consistency between objectives, theoretical framework, and conclusions was guaranteed through conceptual triangulation, that is, the systematic comparison of arguments and evidence from different authors to identify convergences, contradictions, and theoretical gaps [24].
A reflexive and self-critical stance was also adopted regarding the researcher’s role as a mediator of knowledge. Recognizing the limits of interpretation—particularly in a rapidly evolving field—is essential to avoid confirmation bias or technocentric perspectives. Consequently, this critical review does not aim to replace systematic evidence but to complement it, offering a more integrative and human perspective on the psychological and social changes linked to technological expansion.
In summary, the methodology ensures a balance between academic rigor and interpretive flexibility, situating the debate on AI and emotional well-being within an interdisciplinary framework of reflection. Its value lies in generating renewed readings of the phenomenon, identifying conceptual gaps, and contributing to a more ethical, critical, and emotionally sustainable understanding of the relationship between artificial intelligence and human experience.
Figure 1 illustrates the full selection process in six sequential stages, enhancing transparency and facilitating replicability.

2. Psychosocial Impact of Artificial Intelligence on Emotional Well-Being: Review and Critical Analysis

2.1. State of the Art and Theoretical Gap

Over the past five years, research on artificial intelligence (AI) and emotional well-being has expanded across clinical psychology, organizational behavior, human–computer interaction, and educational technology. Existing reviews have examined AI-supported mental-health interventions, the effectiveness of conversational agents, human–AI trust, ethical tensions, and the psychosocial consequences of AI integration in work and learning environments. These studies collectively show that AI increasingly mediates emotional expression, interpersonal connection, and everyday self-regulation, while simultaneously raising concerns about dependence, depersonalization, and loss of authenticity. Despite this growth, current reviews tend to remain domain-specific—focusing either on clinical outcomes, organizational productivity, or learning processes—without offering an integrated psychosocial and ethical perspective. Moreover, theoretical grounding is often implicit: key frameworks such as human-centered AI, mediated social interaction, theories of dehumanization, and models of trust in automation are referenced unevenly and rarely synthesized into a coherent analytic structure.
This article addresses this gap by offering a cross-contextual, integrative framework that explains how AI systems shape emotional well-being across technological, relational, and existential dimensions. By consolidating insights from psychology, ethics, organizational studies, and education, the review advances a unified conceptual model that captures the paradoxical nature of AI-mediated emotional support: its capacity to expand access to care and connection while simultaneously introducing risks of emotional over-reliance, reduced self-regulation, and symbolic forms of dehumanization. In doing so, the article clarifies the mechanisms through which AI co-constructs emotional experiences and provides a theoretically grounded foundation for understanding its impact on contemporary psychosocial life.

2.2. Transformations in Emotional Experience and Human Relationships

Artificial intelligence (AI) has become one of the key mediators of contemporary emotional life, subtly intervening in how individuals experience and make sense of their inner states. Its presence in workplaces, educational settings, and domestic environments has not merely added a technological layer to daily routines; it has reshaped the architecture of emotional expression, influencing how people interpret, externalize, and negotiate feelings with others. More than a technical instrument, AI increasingly operates as a relational environment that conditions the rhythms of empathy, communication, and mutual recognition [1]. This shift reflects a broader sociotechnical reconfiguration in which emotional life becomes entangled with algorithmic systems that interpret, classify, and sometimes simulate affective cues.
One of the most notable transformations is the growing delegation of emotional labor to automated systems. Applications offering psychological companionship, virtual assistants that simulate attentive listening, and algorithms capable of recognizing moods through voice or facial expressions are redefining what users expect from emotional support [11]. These technologies promise constant availability, discretion, and immediacy—features that may be appealing in contexts of loneliness or stress. At the same time, their simulated empathy introduces an ambiguous form of relationality: interactions feel personal yet lack intentionality, reciprocity, and moral depth [8]. This paradox illustrates a broader tension between what AI appears to provide (understanding) and what it can actually offer (pattern-based approximation).
The rise in emotional hyperconnectivity intensifies this dynamic. Increasingly, individuals rely on digital platforms to express emotions or seek comfort, often replacing—or deprioritizing—face-to-face interactions [7]. While this shift may create a sense of immediate community, it also contributes to a fragmentation of emotional presence, where speed and responsiveness overshadow depth and reflection. As Alhuwaydi [13] and Gual-Montolio et al. [14] emphasize, when connection becomes synonymous with constant availability, traditional markers of emotional support—silence, physical proximity, shared temporality—lose symbolic value. This erosion suggests that AI-mediated communication does not merely alter emotional expression but transforms collective norms about what counts as genuine interpersonal engagement.
From a psychosocial perspective, AI alters not only the channels of affective communication but also the meanings and social valuation of emotions themselves. In many digital ecosystems, algorithms amplify content that triggers strong reactions, reinforcing an affective culture driven by immediacy, visibility, and polarization. This emphasis encourages emotional exposure and external validation, shaping self-perception in ways that affect psychological stability and relational authenticity [1]. Under these conditions, emotions risk becoming performative commodities rather than relational expressions rooted in shared experience.
The increasing sophistication of generative systems introduces further complexity. Some users develop emotionally meaningful bonds with AI assistants that provide a sense of understanding without judgment. These interactions may offer temporary relief but also reconfigure expectations about intimacy, introducing forms of “safe” yet asymmetrical emotional attachment that diverge from human relational norms [15]. This dynamic echoes what Velastegui-Hernández et al. [7] identify as simulated empathy—responses that mimic human sensitivity but lack existential authenticity and embodied resonance. Over time, these mediated intimacies may alter how individuals evaluate vulnerability, trust, and emotional reciprocity in their human relationships. Taken together, these transformations reveal that AI not only expands possibilities for emotional interaction but also redefines the cultural and relational grammar of companionship, care, and affective understanding. As boundaries between human and technological agents become increasingly blurred, critical questions emerge regarding how empathy, intimacy, and emotional well-being are being reconstructed. Understanding these dynamics is essential to evaluate how AI is reshaping not only behaviors and communication patterns but also the very texture of human sensitivity and the capacity to sustain meaningful connections in a profoundly digitalized social world.

2.3. Psychological Well-Being and the Paradoxes of Technological Companionship

The relationship between AI and psychological well-being has become one of the central debates of the past decade, not only because intelligent systems have entered clinical, educational, and organizational domains, but because they now mediate how individuals regulate emotions, relate to others, and interpret their own vulnerabilities. While these technologies offer unprecedented therapeutic opportunities, they also generate ethical, relational, and psychosocial dilemmas that require a careful and nuanced analysis [27,28,29]. From a sociotechnical perspective, AI does not merely support well-being; it actively reshapes the environments, expectations, and relational structures through which well-being is constructed.

2.3.1. The Therapeutic Dimension of Digital Companionship

A growing body of research highlights the potential of AI to expand access to psychological support and enhance the effectiveness of interventions. Virtual assistants, mobile applications, and machine learning algorithms enable continuous emotional monitoring, early detection of symptoms, and tailored coping strategies [14,30]. During the COVID-19 pandemic, the global rise of mHealth tools revealed the capacity of AI to act as an affective bridge between clinical expertise and everyday life, providing scalable emotional companionship while facilitating large-scale surveillance of psychological states [31]. These advances illustrate how AI becomes embedded in the affective infrastructures of daily routines, blurring distinctions between clinical care, personal self-regulation, and consumer technology.
AI-based emotional support systems have shown particular efficacy in mood regulation and patient follow-up [21,32]. By translating behavioral, linguistic, or physiological data into personalized feedback, these tools strengthen emotional self-efficacy and promote reflective understanding of one’s mental states [22]. Schiff et al. [33] consider that ethical frameworks such as IEEE 7010 have further consolidated the evaluation of technological impacts on well-being, reinforcing the notion that AI can contribute to collective mental health when designed and deployed responsibly. In educational settings, AI has also been linked to increases in creativity, motivation, and positive affect when embedded in supportive pedagogical ecosystems [1,34].
Despite these benefits, recent studies caution against idealizing AI as a neutral or purely supportive tool. Authors such as Alhuwaydi [13] and Beg et al. [28] conceptualize AI as an “emotional amplifier”—a system that can enhance human care through availability and simulated empathy, but that also introduces new forms of dependence and algorithmic mediation. In sociotechnical terms, this means that therapeutic support becomes co-produced by human intentions and machine logics, resulting in hybrid companionships that challenge traditional notions of empathy, vulnerability, and psychological safety. Furthermore, AI-mediated emotional support does not replace the human bond but transforms it. Rather than functioning as a supplementary aid, AI increasingly becomes a relational co-actor that shapes expectations about immediacy, emotional response, and the very meaning of being “understood.” This hybridization redefines psychological accompaniment within environments where human and algorithmic agents coexist, negotiate roles, and co-construct emotional norms.

2.3.2. The Paradoxes of Technological Support

Despite its promise, the digital mediation of well-being introduces structural contradictions that complicate AI’s role in psychological care. One of the most prominent tensions is the “paradox of artificial empathy,” in which users report comfort, support, and emotional resonance despite being fully aware that the system lacks genuine intentionality [8]. This coexistence of felt proximity and ontological distance invites a deeper evaluation of authenticity in digital care. From a sociotechnical perspective, artificial empathy constitutes not merely an illusion of understanding but a reconfiguration of the norms that define what counts as emotionally meaningful in technologically saturated environments. As AI becomes increasingly embedded in psychological assistance, it risks normalizing a “soft dehumanization” of care, where well-being is assessed in terms of efficiency, immediacy, and automated responsiveness rather than relational depth [35]. This shift subtly redefines expectations: emotional support becomes something to be delivered rather than co-constructed, thereby weakening the reciprocal foundations of human empathy.
Another paradox emerges through algorithmic fatigue—the cognitive, attentional, and emotional overload resulting from constant interaction with intelligent systems [19,20]. In organizational contexts, this fatigue undermines autonomy and satisfaction [17,18], while in educational settings it fosters anxiety, hypervigilance, or dependency on automated feedback [7,36]. Here, AI serves as both a facilitator and a stressor, illustrating the ambivalent emotional ecology that characterizes technology-mediated environments. Relational well-being is similarly reshaped. Although digital companionship expands opportunities for connection, it may also generate forms of “connected loneliness,” where interaction remains present but its meaning is diminished [11]. This “absent presence” reveals the fragility of intimacy when emotional exchanges are guided by algorithms rather than lived experience. As Makridis and Mishra [16] emphasize, AI’s capacity to strengthen or erode relational well-being depends less on its technical features than on the cultural, ethical, and institutional contexts that shape its use. In this sense, AI does not simply mediate relationships—it co-produces the conditions under which relational depth becomes possible or is progressively diluted.

2.3.3. Toward an Integrative Understanding of AI-Mediated Well-Being

The impact of AI on psychological well-being cannot be reduced to a binary of benefits versus risks. Rather, it unfolds along a continuum of mediated emotional experiences in which well-being is negotiated between technological autonomy, human vulnerability, and the sociocultural frameworks surrounding digital care [7]. This integrative lens aligns with Bankins et al. [27], who conceptualize AI as a relational actor capable of shaping emotions, perceptions, and behavioral patterns within wider social systems [37]. Such an approach moves beyond instrumental views of technology and positions AI within the moral and affective architecture of everyday life.
From this standpoint, technological companionship is simultaneously a cultural artifact and a psychological process. AI reshapes the meanings of care, attention, and presence by simulating recognition, responsiveness, and emotional attunement. However, these simulations coexist with human needs for authenticity, mutuality, and embodied experience. Thus, the challenge is not to reject AI-mediated companionship but to understand how it modifies the boundaries of emotional dependence and self-regulation.
Well-being in the algorithmic era requires cultivating the ability to coexist with systems that mimic empathy while preserving the centrality of human connection. This involves critical emotional literacy, ethical awareness, and organizational frameworks that ensure technology supports rather than supplants the relational foundations of mental health [8]. In educational, clinical, and workplace settings alike, the aim should be to integrate AI in ways that empower individuals without compromising their autonomy or their capacity for genuine interpersonal engagement.
In sum, AI opens an ambivalent horizon: it democratizes psychological assistance while exposing individuals to risks of emotional superficiality, dependency, and ontological confusion. Understanding this tension demands an interdisciplinary and ethically grounded approach capable of balancing technological efficiency with humaneness, ensuring that emotional well-being remains anchored in dignity, authenticity, and relational depth.

2.4. Psychosocial and Ethical Tensions of Emotional Well-Being in the Age of Artificial Intelligence

Between 2020 and 2025, the accelerated expansion of AI reshaped how individuals understand, manage, and experience emotional well-being across clinical, educational, organizational, and everyday contexts. Although this period exposed the therapeutic potential of AI—especially during the COVID-19 pandemic—it also illuminated emerging ethical and psychosocial dilemmas. Emotional well-being, commonly conceptualized as a dynamic balance between environmental demands and personal resources, is increasingly negotiated within technological ecosystems that introduce tensions between autonomy and algorithmic influence, personalization and digital standardization, and virtual connectedness and emotional solitude. Rather than representing isolated problems, these tensions form a sociotechnical landscape in which emotional life is continually mediated and reconfigured.
During the pandemic, mHealth applications provided early evidence of AI’s ability to sustain psychological stability in moments of uncertainty. Through machine learning algorithms, these tools monitored mood states, offered guidance, and reduced isolation through responsive digital interactions [31]. However, the same interfaces that provided companionship partially displaced human bonds, translating emotional support into algorithmic terms. Studies by Beg et al. [28] and Olawade et al. [32] describe the rise in hybrid models in which chatbots and cognitive assistants functioned as initial interlocutors—broadening access to care yet raising concerns about the authenticity, depth, and moral grounding of digitally mediated empathy. This shift illustrates not only technological substitution but a cultural redefinition of what “care” and “presence” mean in emotionally demanding contexts.
AI’s emotional influence extended beyond clinical environments. In organizational settings, AI-driven performance management generated ambivalent effects: improved efficiency and reduced cognitive load [17,18] were accompanied by diminished meaning, relational disconnection, and emotional fatigue when interactions became excessively automated [19,20]. These findings suggest that technological mediation can either empower or erode psychological well-being depending on how it intersects with work culture, managerial norms, and employees’ emotional expectations. Likewise, the tension between support and depersonalization became increasingly evident as AI blurred the boundary between assistance and control.
In educational contexts, AI-based applications influenced learners’ emotional and motivational processes. Research by Dai et al. [34] and Lin and Chen [1] highlights positive effects on creativity and emotional self-efficacy when algorithms operate transparently and promote authentic engagement. However, other studies caution that overreliance on AI-mediated feedback may weaken emotional self-regulation and foster a superficial engagement with learning [7,36]. This ambivalence reflects a broader tension: AI can scaffold emotional development, yet it may simultaneously displace the reflective and relational experiences necessary for genuine emotional growth.
At a structural and societal level, AI functions as an invisible actor shaping social well-being. Makridis and Mishra [16] argue that the rise of “artificial intelligence as a service” generates economic gains without necessarily improving subjective well-being, as it tends to reproduce existing digital inequalities. Moghayedi et al. [11] demonstrate that in industries such as facilities management in South Africa, AI adoption produces both opportunities for inclusion and new forms of precarity. These findings underscore that emotional well-being is not simply an individual psychological state but a socially mediated phenomenon embedded in power dynamics, technological access, and digital divides [38]. Thus, well-being in the age of AI requires acknowledging that emotional experiences are entangled with broader socio-economic and technological structures.
Ethical concerns constitute another major source of tension. Algorithmic assessments of well-being raise issues related to emotional privacy, data transparency, and the right to disconnect from affective surveillance. IEEE 7010 [33] represented a step forward by incorporating principles of fairness and autonomy into AI evaluation. Yet, technological evolution has advanced more rapidly than regulatory frameworks, creating what Jeyaraman et al. [35] call an “ethical enigma”: a widening gap between technical innovation and moral adaptation. This gap increases the risk of emotional manipulation, misuse of sensitive affective data, and confusion between automated care and human relationality. In this sense, the ethical challenge is not merely regulatory but epistemic: societies must reconsider how they define emotional authenticity, responsibility, and consent in technologically mediated environments.
Overall, the literature demonstrates that emotional well-being in the age of AI is relationally and ethically complex. Intelligent systems enhance individuals’ capacity to recognize and regulate emotions [21,22], yet they also foster dependencies that externalize the pursuit of emotional validation. This duality reveals that well-being is increasingly co-created between humans and machines—a hybrid emotional ecology in which agency, authenticity, and affective depth are continually renegotiated.
In summary, between 2020 and 2025, AI acted simultaneously as a catalyst and a disruptor of global emotional well-being. Advances in digital mental health, emotional education, and workplace well-being expanded the reach of psychological support, yet also exposed risks related to emotional superficiality, dependency, diminished empathy, and unequal access to digital resources. The central challenge moving forward is not technological sophistication but the collective ability to integrate AI into an ethical and humanistic framework that preserves the authenticity of emotional experience and protects the conditions under which relational well-being can flourish.

2.5. Conceptual Model of Artificial Intelligence (AI)-Mediated Emotional Well-Being

The proposed conceptual model synthesizes the core relationships identified in the reviewed literature between artificial intelligence (AI) and emotional well-being during the 2020–2025 period. Rather than treating AI as an isolated technological artifact, the model conceptualizes it as a multilevel sociotechnical system that operates simultaneously as a technological mediator, relational actor, and emotional modulator. These functions unfold across three interdependent levels—technological–structural, psychosocial–relational, and ethical–existential—whose interactions shape the ambivalent emotional ecology of contemporary digital life. Figure 2, included below, visually represents these three levels and illustrates the bidirectional flow through which technological infrastructures, relational processes, and ethical–existential dynamics continuously influence one another.

2.5.1. Technological–Structural Level

The technological–structural level encompasses algorithmic systems, chatbots, mHealth platforms, and intelligent digital environments that mediate emotional experience. These systems function as “affective infrastructures” that expand access to psychological support, facilitate emotional self-regulation, and create new spaces for digital companionship [1,22,31].
At this level, technology sets the conditions of emotional possibility: how often users interact, what types of emotional data are collected, and how feedback loops reinforce patterns of dependency, autonomy, or vulnerability. This layer thus determines the structural capacities—and limitations—through which emotional well-being can be mediated by AI.

2.5.2. Psychosocial–Relational Level

The psychosocial–relational level describes the interaction processes between users and AI systems, where the main paradoxes of digital well-being emerge. Central phenomena such as simulated empathy, connected loneliness, and absent presence reveal how AI redefines the meanings of intimacy, trust, and emotional authenticity [7,8].
In this layer, technology becomes a relational co-actor, shaping expectations about immediacy, responsiveness, and emotional resonance. Interactions with AI may temporarily alleviate loneliness or stress, yet they can also blur interpersonal boundaries, diminish embodied presence, and weaken the capacity for meaningful relational engagement. This level captures the fluid and often contradictory emotional experiences that arise when humans and algorithms co-construct affective environments.

2.5.3. Ethical–Existential Level

The ethical–existential level integrates the dilemmas that emerge from the algorithmic mediation of emotional well-being, including autonomy, emotional privacy, and the preservation of humanity in contexts of digital care. This level reflects the tension between efforts to embed ethical design principles into AI [33] and the growing concerns about the dehumanization of psychological support and emotional surveillance [19,35]. Here, emotional well-being is framed as a moral and existential question:
  • How much control should be delegated to machines?
  • What forms of emotional data should remain private?
  • What does it mean to be cared for in a digital society?
This dimension reveals the societal consequences of integrating AI into affective life and highlights the normative choices needed to sustain humane emotional ecosystems.

2.5.4. Implications of the Model

Taken together, these three levels form a circular and bidirectional system in which human experiences feed algorithms, and algorithms reshape perceptions, affective patterns, and social relationships. AI-mediated emotional well-being emerges not as a linear cause–effect relation but as a dynamic co-production, where benefits (access, personalization, constant support) coexist with latent risks (affective dependence, algorithmic fatigue, erosion of empathy). The model proposes that sustainable digital emotional well-being can only be achieved when three principles are balanced:
  • Technological humanization, which situates empathy and ethical intentionality at the core of algorithmic design.
  • Emotional autonomy, which promotes self-regulation and minimizes technological overdependence.
  • Transparency and fairness, which protect emotional privacy and ensure the responsible use of affective data.
This conceptual framework encourages viewing AI not as a solution or a threat but as a hybrid emotional ecosystem in which both human and algorithmic agents evolve in parallel. It allows recent literature to be interpreted as an interdisciplinary dialogue that reveals the constitutive tensions of digital emotional life.
By highlighting AI’s triple function—as infrastructure, relational agent, and emotional modulator—the model explains why the effects of AI on well-being are so diverse and often contradictory.
This theoretical foundation lays the groundwork for Section 3, which examines how these dynamics materialize across three domains—health, work, and education—and how each reflects the central paradox of the algorithmic age: the unstable balance between technological connection and emotional disconnection.

3. Discussion of Results

The analysis of the literature published between 2020 and 2025 reveals a profound transformation in the relationship between artificial intelligence (AI) and human emotional well-being. While the reviewed studies consistently show that AI has expanded resources for mental health support and emotional regulation, they also expose psychological, social, and ethical tensions that challenge conventional understandings of well-being. This discussion integrates both dimensions—opportunity and vulnerability—thereby addressing the central theoretical gap identified at the outset: the absence of a holistic and multilevel perspective that explains how AI can simultaneously function as a facilitator of emotional support and a source of emotional disruption. The multilevel conceptual model proposed in Figure 2 provides the framework through which these findings can be critically interpreted.

3.1. Opportunities and Ambivalences in Health Contexts

One of the clearest insights emerging from the literature is that AI has improved the accessibility and personalization of psychological support, particularly during the COVID-19 pandemic. Studies on mHealth applications [31] demonstrated that digital mediation sustained emotional stability in contexts of isolation, helping individuals reinterpret self-care as an interactive and technologically mediated practice [39]. Parallel evidence from Nashwan et al. [12] shows that mental health professionals—including psychiatric nurses—increasingly integrated AI tools into clinical workflows, generating hybrid care models that enhance monitoring and follow-up without replacing the human bond.
However, this same expansion of digital care introduces new ontological and ethical dilemmas. Sedlakova and Trachsel [5] question whether AI systems should be conceptualized solely as tools or as emerging moral actors, a concern echoed by Beg et al. [28] in their notion of “simulated empathy.” Within the logic of the model in Figure 2, these tensions reflect the psychosocial–relational level, where algorithmic responsiveness generates comfort yet simultaneously undermines emotional authenticity and interpersonal reciprocity. Thus, even where clinical benefits are clear, risks of affective dependence and conceptual confusion remain.

3.2. Contradictions in Organizational Settings

In organizational environments, AI’s impact on well-being is equally ambivalent. Research demonstrates that intelligent systems can reduce cognitive load, streamline tasks, and strengthen perceptions of efficacy and satisfaction [17,18]. Yet other studies [19,20] show that excessive automation dilutes professional identity and fosters depersonalization.
Uysal et al. [6] highlight how anthropomorphized AI assistants can establish emotional bonds with employees, functioning as an “affective Trojan horse” that blurs boundaries between collaboration and dependence. Through the lens of Figure 2, these dynamics reflect the interaction between the technological–structural level (algorithmic systems that shape workflows) and the psychosocial–relational level (emerging patterns of dependency, trust, and relational displacement). When organizations prioritize productivity over human connection, the result is a subtle erosion of autonomy and emotional meaning [40].

3.3. Emotional Implications in Educational Contexts

In the educational domain, AI demonstrates significant potential to enhance creativity, emotional self-regulation, and intrinsic motivation [1,34]. Studies by Pataranutaporn et al. [3] show that AI-generated characters can provide personalized support that fosters adaptive learning. Yet, as Tuomi [4] argues, this mediation risks shifting emotional management from students to algorithms, creating a model of “assisted emotional learning” where affective spontaneity diminishes.
Similarly, Velastegui-Hernández et al. [7] and Vistorte et al. [36] warn that overreliance on algorithmic feedback may cultivate an appearance of well-being that masks emotional fragility and dependence. Interpreted through Figure 2, these findings exemplify how educational AI systems operate at the intersection of structural design, relational engagement, and ethical deliberation—raising fundamental questions about how emotional authenticity can be preserved in digital learning ecosystems.

3.4. Structural Inequalities and Cultural Contexts

Structural and contextual factors significantly influence AI’s effects on well-being. Makridis and Mishra [16] argue that the economic growth associated with “artificial intelligence as a service” does not necessarily translate into emotional equity, often benefiting individuals with higher technological literacy. Moghayedi et al. [11] illuminate how AI in Global South workplaces generates both opportunities for inclusion and new forms of precarity. Convergently, Tornero-Costa et al. [23] highlight that research on AI and mental health often suffers from methodological limitations—such as narrow sampling frames and overreliance on quantitative indicators—that obscure cultural and affective diversity.
From the standpoint of Figure 2, these findings reflect the ethical–existential level, where questions of justice, access, and emotional dignity intersect with technological deployment. The literature thus calls for cross-cultural, context-sensitive approaches that acknowledge how digital well-being is shaped by technological power and social inequalities.

3.5. Ethical Vulnerabilities and Regulatory Gaps

The 2020–2025 period is marked by a growing mismatch between rapid innovation and regulatory capacity. Although IEEE 7010 [33] introduced important dimensions such as fairness, autonomy, and subjective satisfaction, persistent vulnerabilities remain. Jeyaraman et al. [35] and Sood and Gupta [8] highlight ongoing risks of emotional manipulation, algorithmic opacity, and affective dependence. In hospitality settings, Wang and Uysal [9] describe “AI-assisted mindfulness,” a practice that can promote self-regulation but may also replace human introspection with automated scripts, raising concerns about emotional authenticity.
The collection of affective data and the opacity of predictive models create what can be described—through the ethical–existential layer of Figure 2—as involuntary emotional exposure, a form of psychological vulnerability emerging from the digitalization of emotional life. The findings underscore the urgency of designing systems that preserve dignity, autonomy, and the right to emotional silence.

3.6. Integrating Evidence Through the Multilevel Model

Despite the risks, the reviewed studies agree that AI can contribute meaningfully to emotional well-being when embedded in ethical, human-centered frameworks. Dhimolea et al. [22] and Thakkar et al. [21] show that AI can strengthen emotional intelligence when interactions are transparent and supportive [41]. Ozmen Garibay et al. [8] identify six key challenges for human-centered AI, emphasizing emotional sensitivity and cultural inclusivity.
These insights converge with the multilevel model proposed in Figure 2, demonstrating that AI-mediated well-being cannot be understood in isolation but emerges from a dynamic interplay between technological infrastructures, relational dynamics, and ethical–existential considerations. Emotional well-being becomes a hybrid construct, co-created through the continuous interaction between humans and artificial systems.

3.7. Synthesis and Forward-Looking Considerations

Between 2020 and 2025, AI has consolidated its role as an “emergent emotional agent”—a system that does not feel yet shapes how individuals experience emotions, seek support, and construct meaning. This duality creates unprecedented opportunities (expanded mental health access, digital emotional competencies) and urgent challenges (emotional equity, authenticity, autonomy, regulatory reform).
Consistent with the theoretical model, the findings confirm that AI-mediated emotional well-being operates across the interconnected technological, psychosocial, and ethical levels. This framework not only integrates fragmented debates across health, organizational, and educational domains but also provides a foundation for future policies and research agendas aimed at developing a more human, conscious, and ethically grounded digital emotional ecosystem.

4. Theoretical Implications

The psychosocial analysis of artificial intelligence (AI) between 2020 and 2025 makes it possible to rethink the relationship between technology, emotions, and human well-being from an interdisciplinary standpoint. The findings show that approaches centered exclusively on technological efficiency or instrumental benefits are inadequate for understanding the phenomenon. Instead, it is necessary to advance toward theoretical models that integrate the emotional, relational, and ethical dimensions of human–machine interaction. Within this perspective, AI should be understood not merely as a functional tool but as a relational agent, one that actively participates in shaping social bonds, perceptions of support, and the construction of the digital self [1,7]. This interpretation aligns with the multi-level structure proposed in Figure 2, emphasizing that emotional experience in the digital era emerges from the interplay between technological infrastructures, psychosocial dynamics, and ethical meaning-making.
One of the central theoretical contributions of this study lies in putting forward an integrative perspective on AI-mediated emotional well-being, articulating three interdependent dimensions: psychological, social, and technological. The psychological dimension concerns the internal processes of self-regulation, emotional reflection, and the redefinition of self-perception generated through constant exposure to automated systems. The social dimension refers to the reconfiguration of interpersonal bonds, collective practices, and emerging forms of digital community. The technological dimension encompasses the algorithmic mechanisms that interpret, classify, and modulate affective states. This multi-layered framework acknowledges that contemporary emotional experience is not produced in isolation but instead unfolds in hybrid environments where human emotionality is partially coded, quantified, and reinterpreted by AI [8,15]. By synthesizing these three components, the study provides a conceptual foundation for future comparative research on the psychological impacts of AI and the relational sustainability of increasingly digitalized environments.
The study also contributes to the debate surrounding the “digital affective paradox,” defined as the persistent tension between emotional connection and emotional disconnection in hyperconnected societies. AI technologies amplify this paradox: while they expand companionship, guidance, and perceived support, they can simultaneously foster emotional dependence on non-human agents. Such ambivalence challenges classical notions of autonomy, presence, and authenticity, requiring renewed attention to concepts such as digital empathy—the capacity of technological interfaces to simulate emotional understanding—and authentic emotional well-being, understood as the alignment between subjective emotional experience and technological mediation [11,14]. From a theoretical standpoint, this tension underscores the need to examine AI not only as a cognitive or instrumental enhancer but also as a co-producer of emotional meaning.
Additionally, the study proposes expanding the conceptual vocabulary of well-being psychology through the notion of algorithmic emotional well-being, defined as the way algorithms filter, quantify, and modulate human emotions. This concept does not promote a deterministic or pessimistic view; rather, it invites reflection on how personalization mechanisms—embedded in self-care applications, mHealth platforms, and emotional support systems—are reshaping emotional balance and decision-making processes. During 2020–2025, the widespread adoption of these tools illustrated that technological mediation can empower self-care practices but may also reduce emotional autonomy depending on users’ levels of digital literacy and critical awareness [31]. Theoretically, this suggests that emotional well-being can no longer be analyzed without acknowledging the algorithmic architectures that sustain it.
From a sociotechnical standpoint, this work invites scholars to consider AI as a new social and affective actor, one that not only reflects human emotions but also shapes them by anticipating affective states, recommending behaviors, and generating simulated empathetic responses. Understanding AI therefore entails recognizing its agency within a shared emotional ecology where humans and intelligent systems co-construct meaning, expectations, and emotional norms.
Finally, this theoretical reflection reinforces the need for an interdisciplinary framework that brings together social psychology, technological ethics, and digital communication studies. Such integration is essential for explaining emerging phenomena such as emotional depersonalization, affective hyperconnectivity, and the delegation of emotional decisions to automated systems [42]. Collectively, these contributions help address the knowledge gap identified in this article by offering a robust conceptual foundation for future research on how AI reshapes emotional experience and the dynamics of well-being in digital societies.

5. Practical Implications

The practical implications derived from this critical review highlight that emotional well-being in the age of artificial intelligence (AI) depends not only on technological progress but, fundamentally, on the ethical, relational, and institutional choices guiding its design and use. Consistent with the three-level model proposed in Figure 2—technological–structural, psychosocial–relational, and ethical–existential—the findings reveal that AI-mediated emotional well-being requires coordinated actions from individuals, organizations, public institutions, and digital communities. Each level interacts with the others and generates both opportunities for support and risks of dependency, depersonalization, or emotional distortion, as shown across the studies analyzed [5,11,43].

5.1. Implications at the Individual Level: Emotional Self-Regulation and Digital Agency

The reviewed evidence shows that individuals increasingly rely on AI systems—such as mHealth apps, chatbots, and virtual assistants—to manage stress, loneliness, or emotional dysregulation [13,43,44]. These tools can expand emotional resources but also facilitate dependence, reduced autonomy, or a displacement of introspection toward automated scripts.
Accordingly, individuals need enhanced emotional and critical digital literacy to distinguish when AI contributes to well-being and when it undermines self-regulation [45,46]. Well-being programs, both personal and institutional, should incorporate:
  • Training in recognizing synthetic empathy and its limits [5,28].
  • Boundaries for digital companionship, preventing excessive reliance on simulated emotional support.
  • Strategies for conscious digital disconnection, especially for users of personalized or always-on systems.
This aligns with the psychosocial–relational dimension of Figure 2, which emphasizes the need to preserve autonomy and authentic emotional experience.

5.2. Implications for Organizations: Human-Centered Digital Ecosystems

Across organizational studies, AI adoption improves efficiency but can simultaneously erode professional identity, weaken belonging, or induce digital fatigue [7,18,19,40]. The findings show that employee well-being emerges from the interplay between technological configurations and relational climates.
Thus, organizations must develop human-centered digital ecosystems, where AI enhances—not replaces—empathy, trust, and meaningful interaction. This includes:
  • Training leaders in conscious digital leadership, integrating emotional competencies with ethical evaluation of AI use [47].
  • Designing workflows that avoid emotional over-automation, ensuring space for human deliberation and interpersonal connection.
  • Monitoring for algorithmic pressure, surveillance perceptions, and relational erosion, all of which were repeatedly identified as risks in the 2020–2025 evidence.
Organizations should apply the technological–structural level of the conceptual model, ensuring that systems are implemented with relational sensitivity and ethical safeguards [48].

5.3. Implications for Public Institutions and Policymakers: Emotional Equity and Digital Rights

Empirical findings suggest that AI benefits are unevenly distributed and may reproduce or amplify emotional inequalities, particularly in settings with limited digital literacy or weak regulatory frameworks [11,16]. In response, public policy must incorporate emotional well-being as a core criterion in AI governance.
Priority actions include:
  • Algorithmic transparency requirements that reveal how emotional data are processed.
  • Affective data protection policies that safeguard users from unintended emotional exposure [8,35].
  • Regulations governing therapeutic, educational, and service-oriented AI, ensuring that systems do not replace human support where relational sensitivity is essential.
  • Digital inclusion strategies that reduce cultural and technological gaps [1].
This corresponds to the ethical–existential level of the model, ensuring dignity, safety, and affective equity.

5.4. Implications for Healthcare and Education Systems: Responsible Emotional Mediation

Evidence from mental health and education demonstrates that AI can support diagnosis, monitoring, and emotional accompaniment [3,30,31,36]. However, these systems can also generate depersonalization, emotional outsourcing, or diminished spontaneity [4,7].
Therefore, institutions should:
  • Integrate impact assessment protocols that evaluate AI-mediated emotional risks, such as dependency, reduced introspection, or decreased belonging.
  • Use AI tools as adjuncts—not replacements—for human professionals, reinforcing hybrid models of care [12].
  • Ensure that adaptive educational systems incorporate emotion-sensitive modules that support creativity, intrinsic motivation, and human connection [1,34].
This reinforces the need for relational designs that maintain the centrality of human sensitivity.

5.5. Implications at the Community and Societal Level: Toward Emotionally Sustainable Digital Cultures

The studies reviewed highlight the emergence of new digital communities and affective networks shaped by AI-mediated interactions. These environments can foster belonging, but they can also amplify misinformation, social comparison pressures, and emotional fragmentation [9,15].
To build emotionally sustainable technological cultures, communities should promote:
  • Collective emotional literacy, enabling shared reflection on technology’s role in shaping feelings and relationships.
  • Community norms that prioritize empathy, cooperation, and ethical digital coexistence.
  • Practices that recognize both the opportunities and vulnerabilities inherent in AI-mediated social life.
Ultimately, societal well-being requires developing a mature emotional culture capable of integrating AI without sacrificing human meaning, dignity, and relational depth [49].

5.6. Closing Synthesis

Together, these practical implications demonstrate that the challenges and opportunities of AI-mediated emotional well-being are best addressed through integrated, multilevel, and ethically grounded strategies, consistent with the conceptual model developed in this study. Emotional well-being in AI-driven societies depends on:
  • Human autonomy;
  • Relational authenticity;
  • Empathic leadership;
  • Emotionally aware policy;
  • Responsible technological design.
Only through such coordinated efforts can AI contribute to healthier, fairer, and more human-centered emotional ecosystems.

6. Limitations and Future Research Lines

This study is framed within the historical period from 2020 to 2025, a phase characterized by unprecedented acceleration in the development and adoption of artificial intelligence (AI) technologies. Although this time frame is suitable to capture the most recent transformations in the relationship between AI and emotional well-being, it also entails an inherent limitation: the speed of technological change exceeds the academic capacity to construct stable theoretical frameworks. For this reason, the findings should be interpreted as an analytical snapshot of a rapidly evolving phenomenon rather than as a definitive synthesis.
A second limitation derives from the theoretical and reflective nature of the adopted approach. This article does not present direct empirical data; instead, it offers a critical review supported by interdisciplinary scientific literature. While this strategy allows for identifying emerging trends, psychosocial tensions, and ethical dilemmas, it limits the ability to quantify the effects of AI on specific variables such as anxiety, empathy, or subjective well-being. Consequently, future studies should incorporate mixed methodologies—quantitative, qualitative, and experimental—to triangulate empirical and conceptual evidence and deepen our understanding of how algorithmic mediation influences emotional health.
Contextual heterogeneity represents another significant limitation. The use, acceptance, and impact of AI tools—such as mHealth systems, virtual assistants, or therapeutic chatbots—vary widely across cultural, socioeconomic, and technological settings. These differences shape experiences of digital well-being and restrict the generalizability of results. Therefore, it is essential to advance toward comparative and cross-cultural research that examines how educational, social, and economic factors modulate the relationship between AI and emotional well-being. Particular attention should be given to perspectives from the Global South, where infrastructure gaps and regulatory limitations create emotionally vulnerable scenarios that remain underexplored [50,51,52].
In addition, most of the reviewed studies rely on short-term observations or controlled environments. This lack of longitudinal evidence limits the understanding of how sustained interaction with AI reshapes attachment patterns, emotional autonomy, or perceptions of social support over time. Future research should therefore examine the long-term evolution of these dynamics to distinguish between healthy emotional adaptation and affective dependency on intelligent systems.
The findings of this study—summarized in the three-level conceptual model (technological–structural, psychosocial–relational, and ethical–existential)—suggest several specific avenues for future research:
  • Mechanisms of Algorithmic Affectivity: Future studies should investigate how AI systems detect, categorize, and modulate emotions, and how these mechanisms influence the psychosocial–relational level identified in this review. Understanding these processes is essential to prevent automated emotional manipulation and to refine ethical principles of fairness and transparency.
  • Digital Emotional Identity and the Construction of the Algorithmic Self: Results show that emotional expression in digital environments is increasingly shaped by algorithmic infrastructures. Research should therefore examine how individuals build, perform, and negotiate emotions in mediated contexts and how these processes affect authenticity, trust, and identity formation over time.
  • AI-Assisted Emotional Regulation and the Autonomy–Dependence Continuum: Given the ambivalence observed in the literature—AI can either enhance self-regulation or foster dependence—future research should specify under which conditions AI-assisted regulation supports emotional autonomy and when it risks substituting or diminishing introspective capacities.
  • Longitudinal Impacts of Hybrid Emotional Ecosystems: The conceptual model emphasizes the circular dynamic through which humans feed algorithms and algorithms shape human emotional experience. Longitudinal research is needed to understand how this loop evolves, particularly in domains highlighted in the findings: mental health, organizational well-being, and education.
  • Cultural and Structural Moderators in AI-mediated Well-being: Considering the contextual variability documented in the results, cross-cultural studies should identify structural moderators—such as technological literacy, socioeconomic inequality, and cultural norms—that condition whether AI operates as a facilitator or disruptor of emotional well-being.
  • Ethical Architecture of Emotional Data: As several studies highlighted persistent dilemmas regarding affective privacy, algorithmic opacity, and emotional dignity, future research should develop frameworks capable of evaluating how emotional data are collected, interpreted, and used. This includes exploring regulatory strategies inspired by standards such as IEEE 7010 [33]; a minimal illustrative sketch of what an auditable emotional-data record could look like follows this list.
  • Relational Dynamics in Human–AI Emotional Interaction: The review shows tensions between simulated empathy, connected loneliness, and absent presence. Future studies should analyze these relational paradoxes in depth, examining how empathy, trust, and perceived support are redistributed within hybrid environments where human and non-human actors coexist.
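As a purely illustrative aid to the "ethical architecture of emotional data" direction above, the following Python sketch shows what a minimal auditable record of affective-data processing could look like. The field names, purposes, and retention rule are assumptions introduced for illustration; they are not taken from IEEE 7010 or from any system reviewed here.

```python
# Hypothetical sketch of an auditable record for emotional-data processing.
# Field names, purposes, and the retention rule are illustrative assumptions,
# not requirements drawn from IEEE 7010 or from any reviewed system.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class AffectiveDataRecord:
    user_id: str                      # pseudonymized identifier
    collected_at: datetime
    signal_type: str                  # e.g., "text_sentiment", "voice_prosody"
    inferred_label: str               # emotion category the system assigned
    purpose: str                      # declared purpose of processing
    consent_obtained: bool
    retention_days: int = 30
    access_log: List[str] = field(default_factory=list)

    def is_expired(self, now: datetime) -> bool:
        """Retention check: expired records should be deleted or anonymized."""
        return now > self.collected_at + timedelta(days=self.retention_days)

    def audit_summary(self) -> str:
        """Human-readable line an oversight body could inspect."""
        return (f"{self.collected_at.date()} | {self.signal_type} -> "
                f"{self.inferred_label} | purpose={self.purpose} | "
                f"consent={self.consent_obtained}")

if __name__ == "__main__":
    record = AffectiveDataRecord(
        user_id="u-4821",
        collected_at=datetime(2025, 3, 1, 9, 30),
        signal_type="text_sentiment",
        inferred_label="anxious",
        purpose="self-care recommendation",
        consent_obtained=True,
    )
    print(record.audit_summary())
    print("expired:", record.is_expired(datetime(2025, 5, 1)))
```

The point of such a structure is not the specific fields but the principle they make visible: if emotional inferences are recorded together with their declared purpose, consent status, and retention limits, affective privacy and algorithmic opacity become empirically auditable questions rather than abstract concerns.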
Altogether, these lines of research advance a more specific, contextualized, and conceptually grounded agenda for future studies on emotional well-being in the age of artificial intelligence. The central challenge will be to construct integrative models that capture not only the technological effects of AI but also the cultural, moral, and affective dimensions that shape human experience in increasingly algorithmic societies.

7. Conclusions

The analysis of literature published between 2020 and 2025 shows that artificial intelligence (AI) has become a central mediator of contemporary emotional life, not simply by expanding technological capabilities but by reshaping how individuals perceive, regulate, and express their emotions across clinical, organizational, and educational contexts. The findings confirm that AI is no longer a peripheral tool but an emergent relational and affective agent—one that participates in the co-construction of well-being, vulnerability, and emotional meaning in digital environments.
The main contribution of this study lies in proposing a holistic and theoretically integrated understanding of AI-mediated emotional well-being, articulated through the three-level conceptual model (technological–structural, psychosocial–relational, and ethical–existential). This model highlights that emotional well-being is neither a purely individual state nor a purely technological outcome; instead, it emerges from the dynamic, circular, and bidirectional interactions between human experiences and algorithmic systems. The reviewed evidence shows that AI shapes emotional experience by organizing infrastructures of support, mediating relational dynamics, and generating new ethical dilemmas around autonomy, authenticity, and affective privacy.
Across domains, the findings reveal that AI functions as an ambivalent catalyst. It enables greater access to psychological resources [30,31], fosters creativity and adaptive learning [1,34], and supports emotional regulation and monitoring [12]. Yet, it also introduces vulnerabilities: affective dependence [6], depersonalization in work and care contexts [5,19], algorithmic fatigue, and forms of emotional outsourcing that may erode spontaneity or reflective awareness [4,7]. This duality underscores that digital well-being depends not only on system performance but on the ethical and relational design choices embedded in AI.
A central insight of this reflection is that fragmented approaches to AI and well-being are insufficient. Clinical, organizational, and educational studies tend to conceptualize emotional well-being in isolation, often focusing either on functional benefits or on risks of dehumanization. By integrating these perspectives into a unified psychosocial and ethical framework, this article demonstrates that emotional well-being in the algorithmic age is a hybrid construct, shaped by technological infrastructures, relational interactions, and moral expectations. This integration responds directly to the knowledge gap identified in the Introduction section and offers a conceptual foundation for understanding the paradoxes that characterize AI-mediated emotional life: increased support alongside increased dependency, enhanced personalization alongside diminished authenticity.
The findings also reinforce the importance of interdisciplinary dialogue, connecting psychology, AI ethics, human–computer interaction, sociology, and organizational behavior. Understanding AI as part of an emerging emotional ecosystem requires approaches capable of examining not only emotional outcomes but also the cultural, relational, and moral conditions that make such outcomes possible. The role of AI in emotional life cannot be fully understood through technological determinism or moral alarmism; instead, it demands a critical, contextual, and relational perspective.
Ultimately, this article argues that the key question for the future is not whether AI improves or harms emotional well-being, but under what conditions it contributes to autonomy, authenticity, and psychological flourishing. Sustainable digital well-being requires designing technologies with human intentionality, fostering ethical and emotionally conscious leadership, reducing structural inequities in access and literacy, and strengthening individuals’ capacity for reflective, autonomous engagement with intelligent systems.
In conclusion, preserving the human dimension in the algorithmic age does not entail rejecting AI but learning to coexist with it responsibly—ensuring that technological innovation serves emotional life rather than shaping it uncritically. AI can become a vehicle for emotional development, support, and resilience, but this potential will only be realized if societies cultivate the ethical, cultural, and relational maturity necessary to guide its evolution. The challenge is collective: to design, regulate, and inhabit AI-mediated environments in ways that protect emotional dignity and enhance, rather than diminish, what fundamentally defines us as human beings—the capacity to feel, understand, and care for others.

Author Contributions

Conceptualization, C.S.-T., J.-A.C.-M. and E.T.-P.; methodology, C.S.-T., J.-A.C.-M. and E.T.-P.; validation, C.S.-T.; formal analysis, C.S.-T.; investigation, C.S.-T., J.-A.C.-M. and E.T.-P.; resources, C.S.-T., J.-A.C.-M. and E.T.-P.; data curation, C.S.-T.; writing—original draft preparation, C.S.-T.; writing—review and editing, C.S.-T., J.-A.C.-M. and E.T.-P.; supervision, J.-A.C.-M. and E.T.-P.; project administration, C.S.-T., J.-A.C.-M. and E.T.-P.; funding acquisition, J.-A.C.-M. and E.T.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: artificial intelligence

References

  1. Lin, H.; Chen, Q. Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: Students and teachers’ perceptions and attitudes. BMC Psychol. 2024, 12, 487. [Google Scholar] [CrossRef] [PubMed]
  2. Ozmen Garibay, O.; Winslow, B.; Andolina, S.; Antona, M.; Bodenschatz, A.; Coursaris, C.; Xu, W. Six human-centered artificial intelligence grand challenges. Int. J. Hum.–Comput. Interact. 2023, 39, 391–437. [Google Scholar] [CrossRef]
  3. Pataranutaporn, P.; Danry, V.; Leong, J.; Punpongsanon, P.; Novy, D.; Maes, P.; Sra, M. AI-generated characters for supporting personalized learning and well-being. Nat. Mach. Intell. 2021, 3, 1013–1022. [Google Scholar] [CrossRef]
  4. Tuomi, I. Artificial intelligence, 21st century competences, and socio-emotional learning in education: More than high-risk? Eur. J. Educ. 2022, 57, 601–619. [Google Scholar] [CrossRef]
  5. Sedlakova, J.; Trachsel, M. Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? Am. J. Bioeth. 2023, 23, 4–13. [Google Scholar] [CrossRef]
  6. Uysal, E.; Alavi, S.; Bezençon, V. Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. J. Acad. Mark. Sci. 2022, 50, 1153–1175. [Google Scholar] [CrossRef]
  7. Velastegui-Hernandez, D.C.; Rodriguez-Pérez, M.L.; Salazar-Garcés, L.F. Impact of Artificial Intelligence on learning behaviors and psychological well-being of college students. Salud Cienc. Y Tecnol.—Ser. De Conf. 2023, 2, 582. [Google Scholar] [CrossRef]
  8. Sood, M.S.; Gupta, A. The Impact of Artificial intelligence on Emotional, Spiritual and Mental wellbeing: Enhancing or Diminishing Quality of Life. Am. J. Psychiatr. Rehabil. 2025, 28, 298–312. [Google Scholar] [CrossRef]
  9. Wang, Y.C.; Uysal, M. Artificial intelligence-assisted mindfulness in tourism, hospitality, and events. Int. J. Contemp. Hosp. Manag. 2024, 36, 1262–1278. [Google Scholar] [CrossRef]
  10. Lee, D.; Yoon, S.N. Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. Int. J. Environ. Res. Public Health 2021, 18, 271. [Google Scholar] [CrossRef]
  11. Moghayedi, A.; Michell, K.; Awuzie, B.; Adama, U.J. A comprehensive analysis of the implications of artificial intelligence adoption on employee social well-being in South African facility management organizations. J. Corp. Real Estate 2024, 26, 237–261. [Google Scholar] [CrossRef]
  12. Nashwan, A.J.; Gharib, S.; Alhadidi, M.; El-Ashry, A.M.; Alamgir, A.; Al-Hassan, M.; Khedr, M.A.; Dawood, S.; Abufarsakh, B. Harnessing artificial intelligence: Strategies for mental health nurses in optimizing psychiatric patient care. Issues Ment. Health Nurs. 2023, 44, 1020–1034. [Google Scholar] [CrossRef]
  13. Alhuwaydi, A.M. Exploring the role of artificial intelligence in mental healthcare: Current trends and future directions—A narrative review for a comprehensive insight. Risk Manag. Healthc. Policy 2024, 17, 1339–1348. [Google Scholar] [CrossRef] [PubMed]
  14. Gual-Montolio, P.; Jaén, I.; Martínez-Borba, V.; Castilla, D.; Suso-Ribera, C. Using artificial intelligence to enhance ongoing psychological interventions for emotional problems in real-or close to real-time: A systematic review. Int. J. Environ. Res. Public Health 2022, 19, 7737. [Google Scholar] [CrossRef] [PubMed]
  15. Shahzad, M.F.; Xu, S.; Lim, W.M.; Yang, X.; Khan, Q.R. Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon 2024, 10, e29523. [Google Scholar] [CrossRef]
  16. Makridis, C.A.; Mishra, S. Artificial intelligence as a service, economic growth, and well-being. J. Serv. Res. 2022, 25, 505–520. [Google Scholar] [CrossRef]
  17. Shaikh, F.; Afshan, G.; Anwar, R.S.; Abbas, Z.; Chana, K.A. Analyzing the impact of artificial intelligence on employee productivity: The mediating effect of knowledge sharing and well-being. Asia Pac. J. Hum. Resour. 2023, 61, 794–820. [Google Scholar] [CrossRef]
  18. Xu, G.; Xue, M.; Zhao, J. The relationship of artificial intelligence opportunity perception and employee workplace well-being: A moderated mediation model. Int. J. Environ. Res. Public Health 2023, 20, 1974. [Google Scholar] [CrossRef]
  19. Cramarenco, R.E.; Burcă-Voicu, M.I.; Dabija, D.C. The impact of artificial intelligence (AI) on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernic. 2023, 14, 731–767. [Google Scholar] [CrossRef]
  20. Tang, P.M.; Koopman, J.; Mai, K.M.; De Cremer, D.; Zhang, J.H.; Reynders, P.; Chen, I. No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. J. Appl. Psychol. 2023, 108, 1766. [Google Scholar] [CrossRef]
  21. Thakkar, A.; Gupta, A.; De Sousa, A. Artificial intelligence in positive mental health: A narrative review. Front. Digit. Health 2024, 6, 1280235. [Google Scholar] [CrossRef] [PubMed]
  22. Dhimolea, T.K.; Kaplan-Rakowski, R.; Lin, L. Supporting social and emotional well-being with artificial intelligence. In Bridging Human Intelligence and Artificial Intelligence; Springer International Publishing: Cham, Switzerland, 2022; pp. 125–138. [Google Scholar] [CrossRef]
  23. Tornero-Costa, R.; Martinez-Millana, A.; Azzopardi-Muscat, N.; Lazeri, L.; Traver, V.; Novillo-Ortiz, D. Methodological and quality flaws in the use of artificial intelligence in mental health research: Systematic review. JMIR Ment. Health 2023, 10, e42045. [Google Scholar] [CrossRef] [PubMed]
  24. Booth, A.; Martyn-St James, M.; Clowes, M.; Sutton, A. Systematic Approaches to a Successful Literature Review, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2021. [Google Scholar]
  25. Torraco, R.J. Writing integrative literature reviews: Using the past and present to explore the future. Hum. Resour. Dev. Rev. 2016, 15, 404–428. [Google Scholar] [CrossRef]
  26. Snyder, H. Literature review as a research methodology: An overview and guidelines. J. Bus. Res. 2019, 104, 333–339. [Google Scholar] [CrossRef]
  27. Bankins, S.; Ocampo, A.C.; Marrone, M.; Restubog, S.L.D.; Woo, S.E. A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. J. Organ. Behav. 2024, 45, 159–182. [Google Scholar] [CrossRef]
  28. Beg, M.J.; Verma, M.; Vishvak Chanthar, K.M.M.; Verma, M.K. Artificial intelligence for psychotherapy: A review of the current state and future directions. Indian J. Psychol. Med. 2025, 47, 314–325. [Google Scholar] [CrossRef]
  29. Cabrera, J.; Loyola, M.S.; Magaña, I.; Rojas, R. Ethical dilemmas, mental health, artificial intelligence, and llm-based chatbots. In International Work-Conference on Bioinformatics and Biomedical Engineering; Springer Nature: Cham, Switzerland, 2023; pp. 313–326. [Google Scholar] [CrossRef]
  30. Li, H.; Zhang, R.; Lee, Y.C.; Kraut, R.E.; Mohr, D.C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digit. Med. 2023, 6, 236. [Google Scholar] [CrossRef]
  31. Alam, M.M.D.; Alam, M.Z.; Rahman, S.A.; Taghizadeh, S.K. Factors influencing mHealth adoption and its impact on mental well-being during COVID-19 pandemic: A SEM-ANN approach. J. Biomed. Inform. 2021, 116, 103722. [Google Scholar] [CrossRef]
  32. Olawade, D.B.; Wada, O.Z.; Odetayo, A.; David-Olawade, A.C.; Asaolu, F.; Eberhardt, J. Enhancing mental health with Artificial Intelligence: Current trends and future prospects. J. Med. Surg. Public Health 2024, 3, 100099. [Google Scholar] [CrossRef]
  33. Schiff, D.; Ayesh, A.; Musikanski, L.; Havens, J.C. IEEE 7010: A new standard for assessing the well-being implications of artificial intelligence. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Virtual, 11–14 October 2020; IEEE: New York, NY, USA, 2020; pp. 2746–2753. [Google Scholar] [CrossRef]
  34. Dai, Y.; Chai, C.S.; Lin, P.Y.; Jong, M.S.Y.; Guo, Y.; Qin, J. Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability 2020, 12, 6597. [Google Scholar] [CrossRef]
  35. Jeyaraman, M.; Balaji, S.; Jeyaraman, N.; Yadav, S. Unraveling the ethical enigma: Artificial intelligence in healthcare. Cureus 2023, 15, e43262. [Google Scholar] [CrossRef]
  36. Vistorte, A.O.R.; Deroncele-Acosta, A.; Ayala, J.L.M.; Barrasa, A.; López-Granero, C.; Martí-González, M. Integrating artificial intelligence to assess emotions in learning environments: A systematic literature review. Front. Psychol. 2024, 15, 1387089. [Google Scholar] [CrossRef] [PubMed]
  37. Murugesan, U.; Subramanian, P.; Srivastava, S.; Dwivedi, A. A study of artificial intelligence impacts on human resource digitalization in industry 4.0. Decis. Anal. J. 2023, 7, 100249. [Google Scholar] [CrossRef]
  38. Mendy, J.; Jain, A.; Thomas, A. Artificial intelligence in the workplace–challenges, opportunities and HRM framework: A critical review and research agenda for change. J. Manag. Psychol. 2025, 40, 517–538. [Google Scholar] [CrossRef]
  39. Alhwaiti, M. Acceptance of artificial intelligence application in the post-COVID ERA and its impact on faculty members’ occupational well-being and teaching self-efficacy: A path analysis using the utaut 2 model. Appl. Artif. Intell. 2023, 37, 2175110. [Google Scholar] [CrossRef]
  40. Malik, N.; Tripathi, S.N.; Kar, A.K.; Gupta, S. Impact of artificial intelligence on employees working in industry 4.0 led organizations. Int. J. Manpow. 2022, 43, 334–354. [Google Scholar] [CrossRef]
  41. Prentice, C.; Dominique Lopes, S.; Wang, X. Emotional intelligence or artificial intelligence–an employee perspective. J. Hosp. Mark. Manag. 2020, 29, 377–403. [Google Scholar] [CrossRef]
  42. Santiago-Torner, C.; Jiménez-Pérez, Y.; Tarrats-Pons, E. Ethical climate, intrinsic motivation, and affective commitment: The impact of depersonalization. Eur. J. Investig. Health Psychol. Educ. 2025, 15, 55. [Google Scholar] [CrossRef]
  43. Chin, H.; Song, H.; Baek, G.; Shin, M.; Jung, C.; Cha, M.; Choi, J.; Cha, C. The potential of chatbots for emotional support and promoting mental well-being in different cultures: Mixed methods study. J. Med. Internet Res. 2023, 25, e51712. [Google Scholar] [CrossRef]
  44. Denecke, K.; Abd-Alrazaq, A.; Househ, M. Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges. In Multiple Perspectives on Artificial Intelligence in Healthcare. Lecture Notes in Bioengineering; Househ, M., Borycki, E., Kushniruk, A., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
  45. Santiago-Torner, C. Clima ético benevolente y autoeficacia laboral. La mediación de la motivación intrínseca y la moderación del compromiso afectivo en el sector eléctrico colombiano. Lect. Econ. 2024, 101, 235–269. [Google Scholar] [CrossRef]
  46. Santiago-Torner, C. Creativity and emotional exhaustion in virtual work environments: The ambiguous role of work autonomy. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 2087–2100. [Google Scholar] [CrossRef]
  47. Santiago-Torner, C. Ethical leadership and creativity in employees with University education: The moderating effect of high intensity telework. Intang. Cap. 2023, 19, 393–414. [Google Scholar] [CrossRef]
  48. Santiago-Torner, C. Teleworking and emotional exhaustion in the Colombian electricity sector: The mediating role of affective commitment and the moderating role of creativity. Intang. Cap. 2023, 19, 207–258. [Google Scholar] [CrossRef]
  49. Santiago-Torner, C.; Corral-Marfil, J.A.; Jiménez-Pérez, Y.; Tarrats-Pons, E. Impact of ethical leadership on autonomy and self-efficacy in virtual work environments: The disintegrating effect of an egoistic climate. Behav. Sci. 2025, 15, 95. [Google Scholar] [CrossRef]
  50. Santiago-Torner, C.; Corral-Marfil, J.A.; Tarrats-Pons, E. The relationship between ethical leadership and emotional exhaustion in a virtual work environment: A moderated mediation model. Systems 2024, 12, 454. [Google Scholar] [CrossRef]
  51. Santiago-Torner, C. Teletrabajo y clima ético: El efecto mediador de la autonomía laboral y del compromiso organizacional. Rev. Metod. Cuant. Econ. Empres. 2023, 36, 1–23. [Google Scholar] [CrossRef]
  52. Santiago-Torner, C. Relación entre liderazgo ético y motivación intrínseca: El rol mediador de la creatividad y el múltiple efecto moderador del compromiso de continuidad. Rev. Metod. Cuant. Econ. Empres. 2023, 36, 1–27. [Google Scholar] [CrossRef]
Figure 1. Flow diagram.
Figure 2. Multilevel conceptual model of AI-mediated emotional well-being. Note: The model depicts three interconnected levels—technological–structural, psychosocial–relational, and ethical–existential—linked through bidirectional pathways that form a continuous circular loop. In this configuration, technological–structural conditions shape the nature and quality of human–AI interactions at the psychosocial–relational level; these interactions, in turn, generate new ethical–existential dilemmas concerning autonomy, emotional privacy, authenticity, and human agency. The ethical responses and value frameworks emerging at this third level subsequently influence how technologies are designed, regulated, and integrated, thus feeding back into the technological–structural level and completing the cycle. This circular architecture highlights the dynamic co-evolution of technical infrastructures, relational processes, and ethical–existential considerations in shaping contemporary emotional well-being.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
