Abstract
Between 2020 and 2025, rapid advances in artificial intelligence (AI) reshaped how individuals access emotional support, express feelings, and build interpersonal trust. This article offers a critical reflection—based on an analytical review of 40 peer-reviewed studies—on the psychosocial, ethical, and sociotechnical tensions that characterize AI-mediated emotional well-being. We document both opportunities (expanded access to support, personalization, and early detection) and risks (simulated empathy, affective dependence, algorithmic fatigue, and erosion of relational authenticity). Methodologically, we applied a three-phase critical review: exploratory reading, thematic clustering, and interpretive synthesis; sources were retrieved from Scopus, Web of Science and PsycINFO and filtered by relevance, methodological rigor, and topical fit. We propose a conceptual model integrating three interdependent levels—technological–structural, psychosocial–relational, and ethical–existential—and argue for a sociotechnical perspective that recognizes AI as a co-constitutive actor in emotional ecologies. The article closes with targeted research agendas and policy recommendations to foster human-centered AI that preserves emotional autonomy and equity.
1. Introduction
Between 2020 and 2025, the accelerated expansion of artificial intelligence (AI) profoundly reshaped the ways people work, learn, communicate, and manage their emotional well-being. Advances in machine learning, natural language processing, and generative systems have made AI an integral part of everyday life [1,2]. This integration has redefined human interaction patterns and the very notion of psychological balance within a society increasingly mediated by algorithms [3,4]. However, alongside promises of efficiency, personalization, and emotional assistance, new ethical and psychosocial questions have emerged regarding AI’s effects on identity, autonomy, and the quality of affective bonds [5,6,7]. Although the temporal frame “2020–2025” might appear arbitrary at first sight, it corresponds to a period marked by (a) the global digital acceleration triggered by COVID-19, (b) the mass adoption of conversational agents and generative AI, and (c) the consolidation of empirical studies that allow for a coherent synthesis of psychosocial trends. This interval therefore provides a conceptually meaningful window to examine how AI-mediated emotional experiences evolved during a phase of unprecedented technological uptake.
From a psychosocial perspective, the advent of AI has altered the dynamics of human relationships, shaping both perceptions of social support and experiences of loneliness. The so-called “emotional hyperconnectivity” has partly replaced face-to-face encounters with digital interactions, in which emotions are interpreted, classified, or even generated by automated systems [8,9]. While this transformation has expanded access to support resources and reduced geographical and economic barriers [10,11,12], it has also raised concerns about depersonalization and the erosion of emotional authenticity [5,13,14].
The emergence of generative AI tools has intensified the tension between autonomy and technological dependence. Interactive AI systems and self-care applications offer immediacy, privacy, and assistance—qualities particularly appealing to those seeking emotional support [1,3]. Yet these same features can foster excessive reliance on technology for emotional decision-making, encouraging dependency on devices and gradually replacing human contact with digital bonds [6,15]. This phenomenon illustrates what has been termed the “digital affective paradox”: as technological connectivity increases, people do not necessarily feel more connected. In fact, the more they are accompanied by digital systems, the harder it becomes to sustain genuine relationships and a stable sense of belonging [7,9].
Although recent literature on AI and well-being has grown considerably, most studies focus on clinical or functional outcomes—such as stress reduction, learning enhancement, or work efficiency [16,17,18]. Few, however, examine the relational, identity-related, and ethical implications of coexisting with intelligent systems [1,2]. This reveals a more precise knowledge gap: current research lacks integrative frameworks capable of explaining the constitutive tensions that arise when humans and AI systems co-shape affective experiences. Existing studies describe benefits and risks, but they rarely articulate how technological infrastructures, psychosocial processes, and ethical dilemmas interact dynamically. In particular, socio-technical and socio-material perspectives—central for understanding human–technology co-constitution—remain underutilized in the emotional well-being literature.
Moreover, existing studies tend to approach the relationship between AI and well-being from isolated disciplinary domains—clinical, educational, or organizational—without exploring their intersections [3,4]. While mental health research highlights AI’s therapeutic potential and its ability to broaden access to care [5,12], organizational studies warn of digital stress and loss of meaning at work [19,20]. This lack of integration constrains a comprehensive understanding of AI’s impact on human well-being and hinders the development of ethical policies that balance innovation with humanization [2,21]. Thus, the gap is not merely empirical but conceptual: we lack a model that situates emotional well-being within a multilayered system where AI operates simultaneously as a technological infrastructure, a relational mediator, and an ethical actor.
It is therefore essential to promote a critical reflection that transcends technological enthusiasm and focuses on the relational nature of emotional well-being [4,22]. Understanding AI’s impact from this lens entails recognizing that well-being depends not merely on access to digital tools but on the quality of human experiences those tools facilitate or transform [9]. In this sense, AI should be understood not as a functional mechanism but as a relational actor actively shaping emotions, values, and contemporary social interactions [5,6].
The pandemic and post-pandemic contexts further accelerated this emotional-technological integration [16]. During the COVID-19 pandemic, the massive use of digital platforms and virtual psychological support assistants demonstrated both the therapeutic potential of technology and its ethical limitations [12,23]. The experiences accumulated between 2020 and 2025 reveal a significant evolution in how individuals experience intimacy, vulnerability, and emotional companionship in AI-mediated environments [7,9]. This period has therefore become an analytically coherent corpus from which to examine emerging psychosocial patterns, rather than a merely chronological delimitation. Yet, an integrative understanding capable of articulating these insights from an interdisciplinary and ethical standpoint remains scarce [2,21].
Against this backdrop, the present article offers a critical reflection on the psychosocial impacts of artificial intelligence on emotional well-being, drawing upon studies published between 2020 and 2025 in journals ranked in Q1–Q3 of Journal Citation Reports (JCR) and SCImago Journal Rank (SJR). Its aim is to analyze the tensions, paradoxes, and ethical dilemmas arising from the intensive use of intelligent technologies, acknowledging both their benefits and the risks they pose to collective psychological health [5,22].
Furthermore, this work seeks to bridge the gap between clinical and technological approaches through a relational and contextual perspective that situates emotional well-being at the core of the debate on AI [2,4]. It assumes that well-being is a hybrid construct shaped by the continuous interaction between human and artificial intelligence, in which technology can either mitigate or intensify distress [9,21]. Finally, the study invites an interdisciplinary dialogue aimed at fostering a more conscious, ethical, and emotionally sustainable relationship with AI, recognizing that human well-being largely depends on the quality of both human and digital relationships shaping contemporary emotional life [1,6].
2. Methodology
This study adopts a critical reflection approach, grounded in an analytical review of scientific literature published between 2020 and 2025 on the psychosocial impact of artificial intelligence (AI) on emotional well-being. This approach is particularly suitable when the purpose is not to measure or empirically verify a phenomenon but to interpret, contextualize, and reconstruct it theoretically. As Booth et al. [24] and Torraco [25] argue, critical reviews aim to integrate findings from diverse disciplines and perspectives to achieve a deeper understanding of complex and emerging phenomena.
The choice of this design responds to the dynamic nature of AI, whose social and psychological effects evolve faster than empirical validation can capture. Consequently, this approach moves beyond the mere description of results to focus on meaning-making processes and conceptual tensions arising from the relationship between technology and emotional well-being. According to Snyder [26], critical reviews contribute to identifying knowledge gaps, examining implicit assumptions, and proposing new interpretive frameworks for future research.
Unlike systematic review models or protocols such as PRISMA—which seek comprehensiveness and replicability—the critical reflection approach prioritizes conceptual depth, argumentative coherence, and the capacity for synthesis as indicators of academic rigor. Rather than generating closed conclusions, this type of review opens questions and expands interpretive frameworks from an interdisciplinary standpoint.
The corpus analyzed includes 40 scientific publications selected for their theoretical relevance, quality, and currency. Sources were extracted from Scopus, Web of Science, and PsycINFO, restricting the search to journals indexed in Q1 to Q3 of the SJR or JCR systems. Inclusion criteria considered: (a) thematic relevance—i.e., the relationship between AI and emotional well-being or mental health; (b) temporal relevance (2020–2025); and (c) scientific solidity—ensured through peer review and publication in recognized academic outlets.
To ensure methodological transparency, the search strategy incorporated explicit Boolean strings. The core search formula was: (“artificial intelligence” OR “AI” OR “machine learning” OR “chatbot” OR “conversational agent” OR “generative AI”) AND (“emotional well-being” OR “emotional wellbeing” OR “mental health” OR “emotional support” OR “affective”). Filters included: publication years 2020–2025; language = English; document type = articles or reviews; inclusion only of journals indexed in JCR or SJR (Q1–Q3). The searches were conducted in Scopus, Web of Science, and PsycINFO between February and March 2025. These parameters guaranteed conceptual coherence and methodological robustness.
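For transparency and replicability, the Boolean formula reported above can also be expressed programmatically. The following Python sketch simply assembles the same search string from its two term groups; it is an illustration of the query's logical structure only, since database-specific field codes and syntax (which differ across Scopus, Web of Science, and PsycINFO) are omitted.

```python
# Reconstruction of the Boolean search string used in this review.
# The term groups and operators follow the formula reported in the text;
# database-specific field codes (e.g., Scopus's TITLE-ABS-KEY) are omitted.
ai_terms = [
    '"artificial intelligence"', '"AI"', '"machine learning"',
    '"chatbot"', '"conversational agent"', '"generative AI"',
]
wellbeing_terms = [
    '"emotional well-being"', '"emotional wellbeing"', '"mental health"',
    '"emotional support"', '"affective"',
]

# Terms within each group are alternatives (OR); the two groups must
# co-occur (AND), linking an AI concept with a well-being concept.
query = f'({" OR ".join(ai_terms)}) AND ({" OR ".join(wellbeing_terms)})'
print(query)
```

Expressing the string this way makes the two-block structure explicit: any AI-related term must co-occur with any well-being-related term for a record to be retrieved.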
The selection process followed a structured sequence summarized in Figure 1 (Flow Diagram). After removing duplicates, titles and abstracts were screened for topical alignment; full texts were then evaluated based on inclusion and exclusion criteria. Studies were excluded if they (a) lacked direct relevance to emotional well-being; (b) examined AI only from technical or engineering perspectives; (c) fell outside the 2020–2025 time frame; or (d) did not meet peer-review and indexation criteria. The final sample comprised 40 articles that met all eligibility criteria.
Figure 1.
Flow diagram.
The review process unfolded in three phases: an exploratory reading to identify preliminary categories, a thematic clustering of findings around positive impacts, ethical dilemmas, and emerging challenges, and an interpretive synthesis that connected empirical evidence with conceptual frameworks.
During the exploratory phase, preliminary categories were defined inductively. Two independent readings of approximately 15% of the corpus were used to generate initial codes such as “AI-mediated emotional support,” “erosion of authenticity,” “algorithmic dependence,” and “relational displacement.” These preliminary categories provided the foundation for the subsequent thematic clustering. The clustering phase grouped studies based on converging patterns at cognitive, affective, relational, and ethical levels. This process made it possible to identify latent tensions—particularly the contradiction between expanded technological accessibility and diminished emotional authenticity—consistently highlighted in the recent literature.
Finally, the interpretive synthesis involved triangulating findings with psychosocial, ethical, and sociotechnical perspectives. This stage sought not only to summarize evidence but to articulate the constitutive tensions that emerge when humans and AI systems co-shape emotional experiences. Interpretive synthesis was supported by constant comparison techniques, allowing the contrasting of conceptual approaches and the identification of emerging theoretical gaps.
Although it does not conform to formal protocols like PRISMA, the study maintained standards of rigor, transparency, and internal coherence to ensure scientific validity. Consistency between objectives, theoretical framework, and conclusions was ensured through conceptual triangulation, that is, the systematic comparison of arguments and evidence from different authors to identify convergences, contradictions, and theoretical gaps [24].
A reflexive and self-critical stance was also adopted regarding the researcher’s role as a mediator of knowledge. Recognizing the limits of interpretation—particularly in a rapidly evolving field—is essential to avoid confirmation bias or technocentric perspectives. Consequently, this critical review does not aim to replace systematic evidence but to complement it, offering a more integrative and human perspective on the psychological and social changes linked to technological expansion.
In summary, the methodology ensures a balance between academic rigor and interpretive flexibility, situating the debate on AI and emotional well-being within an interdisciplinary framework of reflection. Its value lies in generating renewed readings of the phenomenon, identifying conceptual gaps, and contributing to a more ethical, critical, and emotionally sustainable understanding of the relationship between artificial intelligence and human experience.
Figure 1 illustrates the full selection process in six sequential stages, enhancing transparency and facilitating replicability.
3. Discussion of Results
The analysis of the literature published between 2020 and 2025 reveals a profound transformation in the relationship between artificial intelligence (AI) and human emotional well-being. While the reviewed studies consistently show that AI has expanded resources for mental health support and emotional regulation, they also expose psychological, social, and ethical tensions that challenge conventional understandings of well-being. This discussion integrates both dimensions—opportunity and vulnerability—thereby addressing the central theoretical gap identified at the outset: the absence of a holistic and multilevel perspective that explains how AI can simultaneously function as a facilitator of emotional support and a source of emotional disruption. The multilevel conceptual model proposed in Figure 2 provides the framework through which these findings can be critically interpreted.
3.1. Opportunities and Ambivalences in Health Contexts
One of the clearest insights emerging from the literature is that AI has improved the accessibility and personalization of psychological support, particularly during the COVID-19 pandemic. Studies on mHealth applications [31] demonstrated that digital mediation sustained emotional stability in contexts of isolation, helping individuals reinterpret self-care as an interactive and technologically mediated practice [39]. Parallel evidence from Nashwan et al. [12] shows that mental health professionals—including psychiatric nurses—increasingly integrated AI tools into clinical workflows, generating hybrid care models that enhance monitoring and follow-up without replacing the human bond.
However, this same expansion of digital care introduces new ontological and ethical dilemmas. Sedlakova and Trachsel [5] question whether AI systems should be conceptualized solely as tools or as emerging moral actors, a concern echoed by Beg et al. [28] in their notion of “simulated empathy.” Within the logic of the model in Figure 2, these tensions reflect the psychosocial–relational level, where algorithmic responsiveness generates comfort yet simultaneously undermines emotional authenticity and interpersonal reciprocity. Thus, even where clinical benefits are clear, risks of affective dependence and conceptual confusion remain.
3.2. Contradictions in Organizational Settings
In organizational environments, AI’s impact on well-being is equally ambivalent. Research demonstrates that intelligent systems can reduce cognitive load, streamline tasks, and strengthen perceptions of efficacy and satisfaction [17,18]. Yet other studies [19,20] show that excessive automation dilutes professional identity and fosters depersonalization.
Uysal et al. [6] highlight how anthropomorphized AI assistants can establish emotional bonds with employees, functioning as an “affective Trojan horse” that blurs boundaries between collaboration and dependence. Through the lens of Figure 2, these dynamics reflect the interaction between the technological–structural level (algorithmic systems that shape workflows) and the psychosocial–relational level (emerging patterns of dependency, trust, and relational displacement). When organizations prioritize productivity over human connection, the result is a subtle erosion of autonomy and emotional meaning [40].
3.3. Emotional Implications in Educational Contexts
In the educational domain, AI demonstrates significant potential to enhance creativity, emotional self-regulation, and intrinsic motivation [1,34]. Studies by Pataranutaporn et al. [3] show that AI-generated characters can provide personalized support that fosters adaptive learning. Yet, as Tuomi [4] argues, this mediation risks shifting emotional management from students to algorithms, creating a model of “assisted emotional learning” where affective spontaneity diminishes.
Similarly, Velastegui-Hernández et al. [7] and Vistorte et al. [36] warn that overreliance on algorithmic feedback may cultivate an appearance of well-being that masks emotional fragility and dependence. Interpreted through Figure 2, these findings exemplify how educational AI systems operate at the intersection of structural design, relational engagement, and ethical deliberation—raising fundamental questions about how emotional authenticity can be preserved in digital learning ecosystems.
3.4. Structural Inequalities and Cultural Contexts
Structural and contextual factors significantly influence AI’s effects on well-being. Makridis and Mishra [16] argue that the economic growth associated with “artificial intelligence as a service” does not necessarily translate into emotional equity, often benefiting individuals with higher technological literacy. Moghayedi et al. [11] illuminate how AI in Global South workplaces generates both opportunities for inclusion and new forms of precarity. Convergently, Tornero-Costa et al. [23] highlight that research on AI and mental health often suffers from methodological limitations—such as narrow sampling frames and overreliance on quantitative indicators—that obscure cultural and affective diversity.
From the standpoint of Figure 2, these findings reflect the ethical–existential level, where questions of justice, access, and emotional dignity intersect with technological deployment. The literature thus calls for cross-cultural, context-sensitive approaches that acknowledge how digital well-being is shaped by technological power and social inequalities.
3.5. Ethical Vulnerabilities and Regulatory Gaps
The 2020–2025 period is marked by a growing mismatch between rapid innovation and regulatory capacity. Although IEEE 7010 [33] introduced important dimensions such as fairness, autonomy, and subjective satisfaction, persistent vulnerabilities remain. Jeyaraman et al. [35] and Sood and Gupta [8] highlight ongoing risks of emotional manipulation, algorithmic opacity, and affective dependence. In hospitality settings, Wang and Uysal [9] describe “AI-assisted mindfulness,” a practice that can promote self-regulation but may also replace human introspection with automated scripts, raising concerns about emotional authenticity.
The collection of affective data and the opacity of predictive models create what can be described—through the ethical–existential layer of Figure 2—as involuntary emotional exposure, a form of psychological vulnerability emerging from the digitalization of emotional life. The findings underscore the urgency of designing systems that preserve dignity, autonomy, and the right to emotional silence.
3.6. Integrating Evidence Through the Multilevel Model
Despite the risks, the reviewed studies agree that AI can contribute meaningfully to emotional well-being when embedded in ethical, human-centered frameworks. Dhimolea et al. [22] and Thakkar et al. [21] show that AI can strengthen emotional intelligence when interactions are transparent and supportive [41]. Ozmen Garibay et al. [8] identify six key challenges for human-centered AI, emphasizing emotional sensitivity and cultural inclusivity.
These insights converge with the multilevel model proposed in Figure 2, demonstrating that AI-mediated well-being cannot be understood in isolation but emerges from a dynamic interplay between technological infrastructures, relational dynamics, and ethical–existential considerations. Emotional well-being becomes a hybrid construct, co-created through the continuous interaction between humans and artificial systems.
3.7. Synthesis and Forward-Looking Considerations
Between 2020 and 2025, AI has consolidated its role as an “emergent emotional agent”—a system that does not feel yet shapes how individuals experience emotions, seek support, and construct meaning. This duality creates unprecedented opportunities (expanded mental health access, digital emotional competencies) and urgent challenges (emotional equity, authenticity, autonomy, regulatory reform).
Consistent with the theoretical model, the findings confirm that AI-mediated emotional well-being operates across the interconnected technological, psychosocial, and ethical levels. This framework not only integrates fragmented debates across health, organizational, and educational domains but also provides a foundation for future policies and research agendas aimed at developing a more human, conscious, and ethically grounded digital emotional ecosystem.
4. Theoretical Implications
The psychosocial analysis of artificial intelligence (AI) between 2020 and 2025 makes it possible to rethink the relationship between technology, emotions, and human well-being from an interdisciplinary standpoint. The findings show that approaches centered exclusively on technological efficiency or instrumental benefits are inadequate for understanding the phenomenon. Instead, it is necessary to advance toward theoretical models that integrate the emotional, relational, and ethical dimensions of human–machine interaction. Within this perspective, AI should be understood not merely as a functional tool but as a relational agent, one that actively participates in shaping social bonds, perceptions of support, and the construction of the digital self [1,7]. This interpretation aligns with the multi-level structure proposed in Figure 2, emphasizing that emotional experience in the digital era emerges from the interplay between technological infrastructures, psychosocial dynamics, and ethical meaning-making.
One of the central theoretical contributions of this study lies in putting forward an integrative perspective on AI-mediated emotional well-being, articulating three interdependent dimensions: psychological, social, and technological. The psychological dimension concerns the internal processes of self-regulation, emotional reflection, and the redefinition of self-perception generated through constant exposure to automated systems. The social dimension refers to the reconfiguration of interpersonal bonds, collective practices, and emerging forms of digital community. The technological dimension encompasses the algorithmic mechanisms that interpret, classify, and modulate affective states. This multi-layered framework acknowledges that contemporary emotional experience is not produced in isolation but instead unfolds in hybrid environments where human emotionality is partially coded, quantified, and reinterpreted by AI [8,15]. By synthesizing these three components, the study provides a conceptual foundation for future comparative research on the psychological impacts of AI and the relational sustainability of increasingly digitalized environments.
The study also contributes to the debate surrounding the “digital affective paradox,” defined as the persistent tension between emotional connection and emotional disconnection in hyperconnected societies. AI technologies amplify this paradox: while they expand companionship, guidance, and perceived support, they can simultaneously foster emotional dependence on non-human agents. Such ambivalence challenges classical notions of autonomy, presence, and authenticity, requiring renewed attention to concepts such as digital empathy—the capacity of technological interfaces to simulate emotional understanding—and authentic emotional well-being, understood as the alignment between subjective emotional experience and technological mediation [11,14]. From a theoretical standpoint, this tension underscores the need to examine AI not only as a cognitive or instrumental enhancer but also as a co-producer of emotional meaning.
Additionally, the study proposes expanding the conceptual vocabulary of well-being psychology through the notion of algorithmic emotional well-being, defined as the way algorithms filter, quantify, and modulate human emotions. This concept does not promote a deterministic or pessimistic view; rather, it invites reflection on how personalization mechanisms—embedded in self-care applications, mHealth platforms, and emotional support systems—are reshaping emotional balance and decision-making processes. During 2020–2025, the widespread adoption of these tools illustrated that technological mediation can empower self-care practices but may also reduce emotional autonomy depending on users’ levels of digital literacy and critical awareness [31]. Theoretically, this suggests that emotional well-being can no longer be analyzed without acknowledging the algorithmic architectures that sustain it.
From a sociotechnical standpoint, this work invites scholars to consider AI as a new social and affective actor, one that not only reflects human emotions but also shapes them by anticipating affective states, recommending behaviors, and generating simulated empathetic responses. Understanding AI therefore entails recognizing its agency within a shared emotional ecology where humans and intelligent systems co-construct meaning, expectations, and emotional norms.
Finally, this theoretical reflection reinforces the need for an interdisciplinary framework that brings together social psychology, technological ethics, and digital communication studies. Such integration is essential for explaining emerging phenomena such as emotional depersonalization, affective hyperconnectivity, and the delegation of emotional decisions to automated systems [42]. Collectively, these contributions help address the knowledge gap identified in this article by offering a robust conceptual foundation for future research on how AI reshapes emotional experience and the dynamics of well-being in digital societies.
5. Practical Implications
The practical implications derived from this critical review highlight that emotional well-being in the age of artificial intelligence (AI) depends not only on technological progress but, fundamentally, on the ethical, relational, and institutional choices guiding its design and use. Consistent with the three-level model proposed in Figure 2—technological–structural, psychosocial–relational, and ethical–existential—the findings reveal that AI-mediated emotional well-being requires coordinated actions from individuals, organizations, public institutions, and digital communities. Each level interacts with the others and generates both opportunities for support and risks of dependency, depersonalization, or emotional distortion, as shown across the studies analyzed [5,11,43].
5.1. Implications at the Individual Level: Emotional Self-Regulation and Digital Agency
The reviewed evidence shows that individuals increasingly rely on AI systems—such as mHealth apps, chatbots, and virtual assistants—to manage stress, loneliness, or emotional dysregulation [13,43,44]. These tools can expand emotional resources but also facilitate dependence, reduced autonomy, or a displacement of introspection toward automated scripts.
Accordingly, individuals need enhanced emotional and critical digital literacy to distinguish when AI contributes to well-being and when it undermines self-regulation [45,46]. Well-being programs, both personal and institutional, should incorporate:
- Training in recognizing synthetic empathy and its limits [5,28].
- Boundaries for digital companionship, preventing excessive reliance on simulated emotional support.
- Strategies for conscious digital disconnection, especially for users of personalized or always-on systems.
This aligns with the psychosocial–relational dimension of Figure 2, which emphasizes the need to preserve autonomy and authentic emotional experience.
5.2. Implications for Organizations: Human-Centered Digital Ecosystems
Across organizational studies, AI adoption improves efficiency but can simultaneously erode professional identity, weaken belonging, or induce digital fatigue [7,18,19,40]. The findings show that employee well-being emerges from the interplay between technological configurations and relational climates.
Thus, organizations must develop human-centered digital ecosystems, where AI enhances—not replaces—empathy, trust, and meaningful interaction. This includes:
- Training leaders in conscious digital leadership, integrating emotional competencies with ethical evaluation of AI use [47].
- Designing workflows that avoid emotional over-automation, ensuring space for human deliberation and interpersonal connection.
- Monitoring for algorithmic pressure, surveillance perceptions, and relational erosion, which are risks repeatedly identified in the 2020–2025 evidence.
Organizations should apply the technological–structural level of the conceptual model, ensuring that systems are implemented with relational sensitivity and ethical safeguards [48].
5.3. Implications for Public Institutions and Policymakers: Emotional Equity and Digital Rights
Empirical findings suggest that AI benefits are unevenly distributed and may reproduce or amplify emotional inequalities, particularly in settings with limited digital literacy or weak regulatory frameworks [11,16]. In response, public policy must incorporate emotional well-being as a core criterion in AI governance.
Priority actions include:
- Algorithmic transparency requirements that reveal how emotional data are processed.
- Affective data protection policies that safeguard users from unintended emotional exposure [8,35].
- Regulations governing therapeutic, educational, and service-oriented AI, ensuring that systems do not replace human support where relational sensitivity is essential.
- Digital inclusion strategies that reduce cultural and technological gaps [1].
This corresponds to the ethical–existential level of the model, ensuring dignity, safety, and affective equity.
5.4. Implications for Healthcare and Education Systems: Responsible Emotional Mediation
Evidence from mental health and education demonstrates that AI can support diagnosis, monitoring, and emotional accompaniment [3,30,31,36]. However, these systems can also generate depersonalization, emotional outsourcing, or diminished spontaneity [4,7].
Therefore, institutions should:
- Integrate impact assessment protocols that evaluate AI-mediated emotional risks, such as dependency, reduced introspection, or decreased belonging.
- Use AI tools as adjuncts—not replacements—for human professionals, reinforcing hybrid models of care [12].
- Ensure that adaptive educational systems incorporate emotion-sensitive modules that support creativity, intrinsic motivation, and human connection [1,34].
This reinforces the need for relational designs that maintain the centrality of human sensitivity.
5.5. Implications at the Community and Societal Level: Toward Emotionally Sustainable Digital Cultures
The studies reviewed highlight the emergence of new digital communities and affective networks shaped by AI-mediated interactions. These environments can foster belonging, but they can also breed misinformation, comparison pressures, and emotional fragmentation [9,15].
To build emotionally sustainable technological cultures, communities should promote:
- Collective emotional literacy, enabling shared reflection on technology’s role in shaping feelings and relationships.
- Community norms that prioritize empathy, cooperation, and ethical digital coexistence.
- Practices that recognize both the opportunities and vulnerabilities inherent in AI-mediated social life.
Ultimately, societal well-being requires developing a mature emotional culture capable of integrating AI without sacrificing human meaning, dignity, and relational depth [49].
5.6. Closing Synthesis
Together, these practical implications demonstrate that the challenges and opportunities of AI-mediated emotional well-being are best addressed through integrated, multilevel, and ethically grounded strategies, consistent with the conceptual model developed in this study. Emotional well-being in AI-driven societies depends on:
- Human autonomy;
- Relational authenticity;
- Empathic leadership;
- Emotionally aware policy;
- Responsible technological design.
Only through such coordinated efforts can AI contribute to healthier, fairer, and more human-centered emotional ecosystems.
6. Limitations and Future Research Lines
This study is framed within the historical period from 2020 to 2025, a phase characterized by unprecedented acceleration in the development and adoption of artificial intelligence (AI) technologies. Although this time frame is well suited to capturing the most recent transformations in the relationship between AI and emotional well-being, it also entails an inherent limitation: the speed of technological change exceeds the academic capacity to construct stable theoretical frameworks. For this reason, the findings should be interpreted as an analytical snapshot of a rapidly evolving phenomenon rather than as a definitive synthesis.
A second limitation derives from the theoretical and reflective nature of the adopted approach. This article does not present direct empirical data; instead, it offers a critical review supported by interdisciplinary scientific literature. While this strategy allows for identifying emerging trends, psychosocial tensions, and ethical dilemmas, it limits the ability to quantify the effects of AI on specific variables such as anxiety, empathy, or subjective well-being. Consequently, future studies should incorporate mixed methodologies—quantitative, qualitative, and experimental—to triangulate empirical and conceptual evidence and deepen our understanding of how algorithmic mediation influences emotional health.
Contextual heterogeneity represents another significant limitation. The use, acceptance, and impact of AI tools—such as mHealth systems, virtual assistants, or therapeutic chatbots—vary widely across cultural, socioeconomic, and technological settings. These differences shape experiences of digital well-being and restrict the generalizability of results. Therefore, it is essential to advance toward comparative and cross-cultural research that examines how educational, social, and economic factors modulate the relationship between AI and emotional well-being. Particular attention should be given to perspectives from the Global South, where infrastructure gaps and regulatory limitations create emotionally vulnerable scenarios that remain underexplored [50,51,52].
In addition, most of the reviewed studies rely on short-term observations or controlled environments. This lack of longitudinal evidence limits the understanding of how sustained interaction with AI reshapes attachment patterns, emotional autonomy, or perceptions of social support over time. Future research should therefore examine the long-term evolution of these dynamics to distinguish between healthy emotional adaptation and affective dependency on intelligent systems.
The findings of this study—summarized in the three-level conceptual model (technological–structural, psychosocial–relational, and ethical–existential)—suggest several specific avenues for future research:
- Mechanisms of Algorithmic Affectivity: Future studies should investigate how AI systems detect, categorize, and modulate emotions, and how these mechanisms influence the psychosocial–relational level identified in this review. Understanding these processes is essential to prevent automated emotional manipulation and to refine ethical principles of fairness and transparency.
- Digital Emotional Identity and the Construction of the Algorithmic Self: Results show that emotional expression in digital environments is increasingly shaped by algorithmic infrastructures. Research should therefore examine how individuals build, perform, and negotiate emotions in mediated contexts and how these processes affect authenticity, trust, and identity formation over time.
- AI-Assisted Emotional Regulation and the Autonomy–Dependence Continuum: Given the ambivalence observed in the literature—AI can either enhance self-regulation or foster dependence—future research should specify under which conditions AI-assisted regulation supports emotional autonomy and when it risks substituting or diminishing introspective capacities.
- Longitudinal Impacts of Hybrid Emotional Ecosystems: The conceptual model emphasizes the circular dynamic through which humans feed algorithms and algorithms shape human emotional experience. Longitudinal research is needed to understand how this loop evolves, particularly in domains highlighted in the findings: mental health, organizational well-being, and education.
- Cultural and Structural Moderators in AI-mediated Well-being: Considering the contextual variability documented in the results, cross-cultural studies should identify structural moderators—such as technological literacy, socioeconomic inequality, and cultural norms—that condition whether AI operates as a facilitator or disruptor of emotional well-being.
- Ethical Architecture of Emotional Data: As several studies highlighted persistent dilemmas regarding affective privacy, algorithmic opacity, and emotional dignity, future research should develop frameworks capable of evaluating how emotional data are collected, interpreted, and used. This includes exploring regulatory strategies inspired by standards such as IEEE 7010 [33].
- Relational Dynamics in Human–AI Emotional Interaction: The review shows tensions between simulated empathy, connected loneliness, and absent presence. Future studies should analyze these relational paradoxes in depth, examining how empathy, trust, and perceived support are redistributed within hybrid environments where human and non-human actors coexist.
Altogether, these lines of research advance a more specific, contextualized, and conceptually grounded agenda for future studies on emotional well-being in the age of artificial intelligence. The central challenge will be to construct integrative models that capture not only the technological effects of AI but also the cultural, moral, and affective dimensions that shape human experience in increasingly algorithmic societies.
7. Conclusions
The analysis of the literature published between 2020 and 2025 shows that artificial intelligence (AI) has become a central mediator of contemporary emotional life, not simply by expanding technological capabilities but by reshaping how individuals perceive, regulate, and express their emotions across clinical, organizational, and educational contexts. The findings confirm that AI is no longer a peripheral tool but an emergent relational and affective agent—one that participates in the co-construction of well-being, vulnerability, and emotional meaning in digital environments.
The main contribution of this study lies in proposing a holistic and theoretically integrated understanding of AI-mediated emotional well-being, articulated through the three-level conceptual model (technological–structural, psychosocial–relational, and ethical–existential). This model highlights that emotional well-being is neither a purely individual state nor a purely technological outcome; instead, it emerges from the dynamic, circular, and bidirectional interactions between human experiences and algorithmic systems. The reviewed evidence shows that AI shapes emotional experience by organizing infrastructures of support, mediating relational dynamics, and generating new ethical dilemmas around autonomy, authenticity, and affective privacy.
Across domains, the findings reveal that AI functions as an ambivalent catalyst. It enables greater accessibility to psychological resources [30,31], fosters creativity and adaptive learning [1,34], and supports emotional regulation and monitoring [12]. Yet, it also introduces vulnerabilities: affective dependence [6], depersonalization in work and care contexts [5,19], algorithmic fatigue, and forms of emotional outsourcing that may erode spontaneity or reflective awareness [4,7]. This duality underscores that digital well-being depends not only on system performance but on the ethical and relational design choices embedded in AI.
A central insight of this reflection is that fragmented approaches to AI and well-being are insufficient. Clinical, organizational, and educational studies tend to conceptualize emotional well-being in isolation, often focusing either on functional benefits or on risks of dehumanization. By integrating these perspectives into a unified psychosocial and ethical framework, this article demonstrates that emotional well-being in the algorithmic age is a hybrid construct, shaped by technological infrastructures, relational interactions, and moral expectations. This integration responds directly to the knowledge gap identified in the Introduction section and offers a conceptual foundation for understanding the paradoxes that characterize AI-mediated emotional life: increased support alongside increased dependency, enhanced personalization alongside diminished authenticity.
The findings also reinforce the importance of interdisciplinary dialogue, connecting psychology, AI ethics, human–computer interaction, sociology, and organizational behavior. Understanding AI as part of an emerging emotional ecosystem requires approaches capable of examining not only emotional outcomes but also the cultural, relational, and moral conditions that make such outcomes possible. The role of AI in emotional life cannot be fully understood through technological determinism or moral alarmism; instead, it demands a critical, contextual, and relational perspective.
Ultimately, this article argues that the key question for the future is not whether AI improves or harms emotional well-being, but under what conditions it contributes to autonomy, authenticity, and psychological flourishing. Sustainable digital well-being requires designing technologies with human intentionality, fostering ethical and emotionally conscious leadership, reducing structural inequities in access and literacy, and strengthening individuals’ capacity for reflective, autonomous engagement with intelligent systems.
In conclusion, preserving the human dimension in the algorithmic age does not entail rejecting AI but learning to coexist with it responsibly—ensuring that technological innovation serves emotional life rather than shaping it uncritically. AI can become a vehicle for emotional development, support, and resilience, but this potential will only be realized if societies cultivate the ethical, cultural, and relational maturity necessary to guide its evolution. The challenge is collective: to design, regulate, and inhabit AI-mediated environments in ways that protect emotional dignity and enhance, rather than diminish, what fundamentally defines us as human beings—the capacity to feel, understand, and care for others.
Author Contributions
Conceptualization, C.S.-T., J.-A.C.-M. and E.T.-P.; methodology, C.S.-T., J.-A.C.-M. and E.T.-P.; validation, C.S.-T.; formal analysis, C.S.-T.; investigation, C.S.-T., J.-A.C.-M. and E.T.-P.; resources, C.S.-T., J.-A.C.-M. and E.T.-P.; data curation, C.S.-T.; writing—original draft preparation, C.S.-T.; writing—review and editing, C.S.-T., J.-A.C.-M. and E.T.-P.; supervision, J.-A.C.-M. and E.T.-P.; project administration, C.S.-T., J.-A.C.-M. and E.T.-P.; funding acquisition, J.-A.C.-M. and E.T.-P. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not Applicable.
Informed Consent Statement
Not Applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
AI: artificial intelligence
References
1. Lin, H.; Chen, Q. Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: Students and teachers’ perceptions and attitudes. BMC Psychol. 2024, 12, 487.
2. Ozmen Garibay, O.; Winslow, B.; Andolina, S.; Antona, M.; Bodenschatz, A.; Coursaris, C.; Xu, W. Six human-centered artificial intelligence grand challenges. Int. J. Hum.–Comput. Interact. 2023, 39, 391–437.
3. Pataranutaporn, P.; Danry, V.; Leong, J.; Punpongsanon, P.; Novy, D.; Maes, P.; Sra, M. AI-generated characters for supporting personalized learning and well-being. Nat. Mach. Intell. 2021, 3, 1013–1022.
4. Tuomi, I. Artificial intelligence, 21st century competences, and socio-emotional learning in education: More than high-risk? Eur. J. Educ. 2022, 57, 601–619.
5. Sedlakova, J.; Trachsel, M. Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? Am. J. Bioeth. 2023, 23, 4–13.
6. Uysal, E.; Alavi, S.; Bezençon, V. Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. J. Acad. Mark. Sci. 2022, 50, 1153–1175.
7. Velastegui-Hernandez, D.C.; Rodriguez-Pérez, M.L.; Salazar-Garcés, L.F. Impact of artificial intelligence on learning behaviors and psychological well-being of college students. Salud Cienc. Y Tecnol.—Ser. De Conf. 2023, 2, 582.
8. Sood, M.S.; Gupta, A. The impact of artificial intelligence on emotional, spiritual and mental wellbeing: Enhancing or diminishing quality of life. Am. J. Psychiatr. Rehabil. 2025, 28, 298–312.
9. Wang, Y.C.; Uysal, M. Artificial intelligence-assisted mindfulness in tourism, hospitality, and events. Int. J. Contemp. Hosp. Manag. 2024, 36, 1262–1278.
10. Lee, D.; Yoon, S.N. Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. Int. J. Environ. Res. Public Health 2021, 18, 271.
11. Moghayedi, A.; Michell, K.; Awuzie, B.; Adama, U.J. A comprehensive analysis of the implications of artificial intelligence adoption on employee social well-being in South African facility management organizations. J. Corp. Real Estate 2024, 26, 237–261.
12. Nashwan, A.J.; Gharib, S.; Alhadidi, M.; El-Ashry, A.M.; Alamgir, A.; Al-Hassan, M.; Khedr, M.A.; Dawood, S.; Abufarsakh, B. Harnessing artificial intelligence: Strategies for mental health nurses in optimizing psychiatric patient care. Issues Ment. Health Nurs. 2023, 44, 1020–1034.
13. Alhuwaydi, A.M. Exploring the role of artificial intelligence in mental healthcare: Current trends and future directions—A narrative review for a comprehensive insight. Risk Manag. Healthc. Policy 2024, 17, 1339–1348.
14. Gual-Montolio, P.; Jaén, I.; Martínez-Borba, V.; Castilla, D.; Suso-Ribera, C. Using artificial intelligence to enhance ongoing psychological interventions for emotional problems in real- or close to real-time: A systematic review. Int. J. Environ. Res. Public Health 2022, 19, 7737.
15. Shahzad, M.F.; Xu, S.; Lim, W.M.; Yang, X.; Khan, Q.R. Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon 2024, 10, e29523.
16. Makridis, C.A.; Mishra, S. Artificial intelligence as a service, economic growth, and well-being. J. Serv. Res. 2022, 25, 505–520.
17. Shaikh, F.; Afshan, G.; Anwar, R.S.; Abbas, Z.; Chana, K.A. Analyzing the impact of artificial intelligence on employee productivity: The mediating effect of knowledge sharing and well-being. Asia Pac. J. Hum. Resour. 2023, 61, 794–820.
18. Xu, G.; Xue, M.; Zhao, J. The relationship of artificial intelligence opportunity perception and employee workplace well-being: A moderated mediation model. Int. J. Environ. Res. Public Health 2023, 20, 1974.
19. Cramarenco, R.E.; Burcă-Voicu, M.I.; Dabija, D.C. The impact of artificial intelligence (AI) on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernic. 2023, 14, 731–767.
20. Tang, P.M.; Koopman, J.; Mai, K.M.; De Cremer, D.; Zhang, J.H.; Reynders, P.; Chen, I. No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. J. Appl. Psychol. 2023, 108, 1766.
21. Thakkar, A.; Gupta, A.; De Sousa, A. Artificial intelligence in positive mental health: A narrative review. Front. Digit. Health 2024, 6, 1280235.
22. Dhimolea, T.K.; Kaplan-Rakowski, R.; Lin, L. Supporting social and emotional well-being with artificial intelligence. In Bridging Human Intelligence and Artificial Intelligence; Springer International Publishing: Cham, Switzerland, 2022; pp. 125–138.
23. Tornero-Costa, R.; Martinez-Millana, A.; Azzopardi-Muscat, N.; Lazeri, L.; Traver, V.; Novillo-Ortiz, D. Methodological and quality flaws in the use of artificial intelligence in mental health research: Systematic review. JMIR Ment. Health 2023, 10, e42045.
24. Booth, A.; Martyn-St James, M.; Clowes, M.; Sutton, A. Systematic Approaches to a Successful Literature Review, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2021.
25. Torraco, R.J. Writing integrative literature reviews: Using the past and present to explore the future. Hum. Resour. Dev. Rev. 2016, 15, 404–428.
26. Snyder, H. Literature review as a research methodology: An overview and guidelines. J. Bus. Res. 2019, 104, 333–339.
27. Bankins, S.; Ocampo, A.C.; Marrone, M.; Restubog, S.L.D.; Woo, S.E. A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. J. Organ. Behav. 2024, 45, 159–182.
28. Beg, M.J.; Verma, M.; Vishvak Chanthar, K.M.M.; Verma, M.K. Artificial intelligence for psychotherapy: A review of the current state and future directions. Indian J. Psychol. Med. 2025, 47, 314–325.
29. Cabrera, J.; Loyola, M.S.; Magaña, I.; Rojas, R. Ethical dilemmas, mental health, artificial intelligence, and LLM-based chatbots. In International Work-Conference on Bioinformatics and Biomedical Engineering; Springer Nature: Cham, Switzerland, 2023; pp. 313–326.
30. Li, H.; Zhang, R.; Lee, Y.C.; Kraut, R.E.; Mohr, D.C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digit. Med. 2023, 6, 236.
31. Alam, M.M.D.; Alam, M.Z.; Rahman, S.A.; Taghizadeh, S.K. Factors influencing mHealth adoption and its impact on mental well-being during COVID-19 pandemic: A SEM-ANN approach. J. Biomed. Inform. 2021, 116, 103722.
32. Olawade, D.B.; Wada, O.Z.; Odetayo, A.; David-Olawade, A.C.; Asaolu, F.; Eberhardt, J. Enhancing mental health with artificial intelligence: Current trends and future prospects. J. Med. Surg. Public Health 2024, 3, 100099.
33. Schiff, D.; Ayesh, A.; Musikanski, L.; Havens, J.C. IEEE 7010: A new standard for assessing the well-being implications of artificial intelligence. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Virtual, 11–14 October 2020; IEEE: New York, NY, USA, 2020; pp. 2746–2753.
34. Dai, Y.; Chai, C.S.; Lin, P.Y.; Jong, M.S.Y.; Guo, Y.; Qin, J. Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability 2020, 12, 6597.
35. Jeyaraman, M.; Balaji, S.; Jeyaraman, N.; Yadav, S. Unraveling the ethical enigma: Artificial intelligence in healthcare. Cureus 2023, 15, e43262.
36. Vistorte, A.O.R.; Deroncele-Acosta, A.; Ayala, J.L.M.; Barrasa, A.; López-Granero, C.; Martí-González, M. Integrating artificial intelligence to assess emotions in learning environments: A systematic literature review. Front. Psychol. 2024, 15, 1387089.
37. Murugesan, U.; Subramanian, P.; Srivastava, S.; Dwivedi, A. A study of artificial intelligence impacts on human resource digitalization in Industry 4.0. Decis. Anal. J. 2023, 7, 100249.
38. Mendy, J.; Jain, A.; Thomas, A. Artificial intelligence in the workplace—challenges, opportunities and HRM framework: A critical review and research agenda for change. J. Manag. Psychol. 2025, 40, 517–538.
39. Alhwaiti, M. Acceptance of artificial intelligence application in the post-COVID era and its impact on faculty members’ occupational well-being and teaching self-efficacy: A path analysis using the UTAUT 2 model. Appl. Artif. Intell. 2023, 37, 2175110.
40. Malik, N.; Tripathi, S.N.; Kar, A.K.; Gupta, S. Impact of artificial intelligence on employees working in Industry 4.0 led organizations. Int. J. Manpow. 2022, 43, 334–354.
41. Prentice, C.; Dominique Lopes, S.; Wang, X. Emotional intelligence or artificial intelligence—an employee perspective. J. Hosp. Mark. Manag. 2020, 29, 377–403.
42. Santiago-Torner, C.; Jiménez-Pérez, Y.; Tarrats-Pons, E. Ethical climate, intrinsic motivation, and affective commitment: The impact of depersonalization. Eur. J. Investig. Health Psychol. Educ. 2025, 15, 55.
43. Chin, H.; Song, H.; Baek, G.; Shin, M.; Jung, C.; Cha, M.; Choi, J.; Cha, C. The potential of chatbots for emotional support and promoting mental well-being in different cultures: Mixed methods study. J. Med. Internet Res. 2023, 25, e51712.
44. Denecke, K.; Abd-Alrazaq, A.; Househ, M. Artificial intelligence for chatbots in mental health: Opportunities and challenges. In Multiple Perspectives on Artificial Intelligence in Healthcare; Lecture Notes in Bioengineering; Househ, M., Borycki, E., Kushniruk, A., Eds.; Springer: Cham, Switzerland, 2021.
45. Santiago-Torner, C. Clima ético benevolente y autoeficacia laboral. La mediación de la motivación intrínseca y la moderación del compromiso afectivo en el sector eléctrico colombiano. Lect. Econ. 2024, 101, 235–269.
46. Santiago-Torner, C. Creativity and emotional exhaustion in virtual work environments: The ambiguous role of work autonomy. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 2087–2100.
47. Santiago-Torner, C. Ethical leadership and creativity in employees with university education: The moderating effect of high intensity telework. Intang. Cap. 2023, 19, 393–414.
48. Santiago-Torner, C. Teleworking and emotional exhaustion in the Colombian electricity sector: The mediating role of affective commitment and the moderating role of creativity. Intang. Cap. 2023, 19, 207–258.
49. Santiago-Torner, C.; Corral-Marfil, J.A.; Jiménez-Pérez, Y.; Tarrats-Pons, E. Impact of ethical leadership on autonomy and self-efficacy in virtual work environments: The disintegrating effect of an egoistic climate. Behav. Sci. 2025, 15, 95.
50. Santiago-Torner, C.; Corral-Marfil, J.A.; Tarrats-Pons, E. The relationship between ethical leadership and emotional exhaustion in a virtual work environment: A moderated mediation model. Systems 2024, 12, 454.
51. Santiago-Torner, C. Teletrabajo y clima ético: El efecto mediador de la autonomía laboral y del compromiso organizacional. Rev. Metod. Cuant. Econ. Empres. 2023, 36, 1–23.
52. Santiago-Torner, C. Relación entre liderazgo ético y motivación intrínseca: El rol mediador de la creatividad y el múltiple efecto moderador del compromiso de continuidad. Rev. Metod. Cuant. Econ. Empres. 2023, 36, 1–27.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

