Article

Critical Factors in Young People’s Use and Non-Use of AI Technology for Emotion Regulation: A Pilot Study

1 School of Design, South China University of Technology, Guangzhou 510006, China
2 School of Business and Administration, Hong Kong Metropolitan University, Hong Kong, China
3 Shenzhen Research Institute, City University of Hong Kong, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7476; https://doi.org/10.3390/app15137476
Submission received: 5 June 2025 / Revised: 28 June 2025 / Accepted: 2 July 2025 / Published: 3 July 2025
(This article belongs to the Special Issue Digital Health, Mobile Technologies and Future of Human Healthcare)

Abstract

Emotional difficulties are increasingly prevalent amongst young people, yet the use of AI technology for emotion regulation remains limited. This study aimed to identify young people’s attitudes toward AI technology for emotion regulation and to analyse the factors influencing their decision to use or not use AI technology. Forty participants from China, comprising twenty males and twenty females, with a mean age of twenty-five, took part in the study. Data were collected through semi-structured face-to-face interviews and were analysed using NVivo 11 software. Grounded theory techniques and a three-stage coding approach were used to categorise the data. The grounded theory model demonstrated that user behaviours are influenced by three contexts: personal, technological and environmental. Key influencing factors for user behaviours include fulfilling utilitarian, hedonic and social value needs, such as perceived usefulness, ease of use, trust, positive emotions, interest, social perception, high value, convenience and privacy protection. This study offers theoretical insights and practical recommendations for designing and developing AI technology aimed at emotion regulation in youth populations.

1. Introduction

Recent research has highlighted a growing mental health crisis amongst young people globally, with increasing rates of anxiety and depression [1]. Young people are one of the groups most affected by mental health challenges, often feeling overwhelmed by uncertainties about their future, academic burdens and strained relationships [2]. These emotional struggles are further compounded by societal and familial expectations that push young people to excel academically, secure employment in a competitive job market and conform to the influences of social networks [3]. The combination of these pressures can lead young people to experience intense self-doubt, diminished self-worth and emotional distress, ultimately contributing to mood swings and mental health issues [4]. Given the growing prevalence of these emotional challenges, effective emotion regulation has become essential for helping young people manage their mental well-being.
Emotion regulation refers to the psychological process by which individuals use physiological, cognitive and behavioural strategies to manage their emotions in response to internal and external demands [5]. This process is crucial in helping individuals cope with negative emotions and maintain emotional stability when facing external pressures. Importantly, emotion regulation during adolescence and early adulthood reflects not only immediate coping behaviour but also ongoing neurobiological and social development. The maturation of the brain during this period supports improvements in cognitive control and decision-making [6]. In parallel, relationships with parents and peers influence young people’s emotion regulation and adjustment [7]. For young people, emotion regulation depends largely on their ability to self-regulate and select appropriate tools or products to assist in managing their emotions. These developmental and contextual factors likely shape how young people experience emotional challenges and how they perceive and engage with emotion regulation products. Because emotional difficulties increasingly affect daily life, research on emotion regulation has expanded across various disciplines, including psychology, neuroscience, digital health and social science [8]. Current research focuses on the neural mechanisms behind emotion regulation through neuroimaging techniques such as functional magnetic resonance imaging [9], on the role of emotion regulation in mental health disorders such as depression and anxiety [10] and on resilience-building tools like positive thinking and emotion tracking applications [11,12]. With the increasing impact of emotional challenges on people’s well-being, the need for practical emotional support strategies has led to the integration of advanced technologies. In particular, the application of artificial intelligence (AI) technology has begun to play an increasingly prominent role in emotion regulation [13].
AI technology refers to machines or computer programs capable of performing behaviours that would otherwise require human intelligence [14]. The role of AI technology in emotion regulation is mainly to identify, regulate and analyse emotions [15,16]. Integrating AI technology into emotion regulation products transforms the landscape of emotional well-being solutions, offering personalised and adaptive interventions to help individuals manage their emotions effectively [17]. AI technology in emotion regulation can be grouped into three main categories. First, AI technology is used for emotion detection, where AI-driven interfaces, applications and chatbots adapt their responses based on emotional cues, providing empathetic and supportive interactions [18]. For instance, Woebot, an AI-powered therapeutic chatbot, helps users identify and manage their emotions through cognitive behavioural therapy-based techniques [19]. Second, AI technology combined with wearable devices tracks and analyses mood data, predicting mood fluctuations and helping manage mood disorders [20]. Spire Stone monitors physiological signals and uses AI algorithms to identify patterns and assist users in managing stress and anxiety [21]. Third, AI-powered virtual reality systems and therapeutic games help individuals build and refine their emotion regulation skills [22]. The Virtual Reality Exposure Therapy system, for example, provides controlled environments where users can confront anxiety-inducing situations and regulate their emotional responses through gradual exposure [23]. By analysing large amounts of data, AI technology allows emotion regulation products to adapt to individual needs and offer targeted interventions [24]. Recent advancements in AI-based emotion recognition, such as electroencephalography-driven models combining convolutional neural networks and long short-term memory architectures, offer promising avenues for integrating accurate emotion detection into emotion regulation tools [25]. As a result, AI technology has the potential to improve emotion regulation, providing scalable and accessible solutions that enable individuals to manage their emotions more effectively.
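To make this hybrid architecture concrete, the following PyTorch sketch outlines a minimal CNN-LSTM EEG emotion classifier. It is an illustration only: the channel count, layer sizes and three-class output are assumptions for demonstration, not the configuration reported in [25].

```python
# Minimal sketch of a hybrid CNN-LSTM EEG emotion classifier (illustrative only;
# the 32 EEG channels, layer sizes and 3 emotion classes are assumptions, not
# the configuration from reference [25]).
import torch
import torch.nn as nn

class CnnLstmEmotionNet(nn.Module):
    def __init__(self, n_channels: int = 32, n_classes: int = 3):
        super().__init__()
        # 1D convolutions extract local spatial-temporal features from raw EEG.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models longer-range temporal dependencies across CNN features.
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time_samples)
        feats = self.cnn(x)               # (batch, 128, time')
        feats = feats.transpose(1, 2)     # (batch, time', 128) for the LSTM
        _, (h_n, _) = self.lstm(feats)    # h_n: (1, batch, 64)
        return self.head(h_n.squeeze(0))  # (batch, n_classes) emotion logits

# Example: a batch of 8 one-second EEG segments sampled at 128 Hz.
logits = CnnLstmEmotionNet()(torch.randn(8, 32, 128))
print(logits.shape)  # torch.Size([8, 3])
```

The design follows the division of labour the text describes: convolutions capture short-range signal structure, while the recurrent layer aggregates it over time before classification.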
Despite these advancements, there is a notable gap in attitudinal and behavioural research regarding the use of AI technology in emotion regulation. The application of AI technology in emotion regulation products and services remains limited, and users’ attitudes toward the use of AI technology for emotion regulation are not well understood. This lack of insight underscores the need for further research in this area. To address these research gaps, this study aimed to identify young people’s attitudes toward AI technology for emotion regulation and to analyse the factors influencing their decision to use or not use AI technology. The study focused on two key areas: young people’s attitudes toward AI technology for emotion regulation and reasons behind their use or non-use of AI technology for emotion regulation. Based on the findings, the study developed a grounded theoretical model to explain the use or non-use of AI technology for emotion regulation. Additionally, the study offered practical recommendations to guide the design and development of AI technology for emotion regulation, encouraging their adoption amongst young users and promoting a greater focus on emotion regulation within this demographic.

2. Materials and Methods

2.1. Research Methods

The research employed semi-structured interviews to collect qualitative data on young people’s acceptance of AI technology for emotion regulation. A qualitative methodology can delve into the reasons behind respondents’ answers and gain a deep understanding of their insights [26,27]. Qualitative methods allow individuals to express their attitudes and experiences with AI technology for emotion regulation in their own words, thereby facilitating a rich interpretation of the phenomenon under study [28,29]. Amongst the most commonly used qualitative techniques are focus groups and personal interviews. Given the individualised and private nature of emotion regulation, face-to-face personal interviews were selected for this study. This approach has been widely used across various research areas, such as safety [30,31,32], management [33], medicine [34] and mental health [35]. Studies have shown that interviews effectively capture detailed individual experiences and provide valuable contextual information, aiding the interpretation of data at the personal level [36,37]. Therefore, this study employed individual face-to-face interviews to explore young people’s attitudes toward using AI technology for emotion regulation and to identify the factors influencing their use or non-use of AI technology for emotion regulation. The coding and analysis followed Glaser’s grounded theory approach, which develops concepts or theories from the data rather than testing hypotheses derived from the existing literature [38]. Grounded theory was chosen because it provides a rigorous, systematic approach to developing a theory that is firmly rooted in participants’ perspectives and the empirical data itself rather than relying on pre-existing frameworks. This approach allowed us to move beyond describing participants’ views to integrating categories and identifying relationships between personal, technological and environmental factors, ultimately constructing an explanatory model that reflects how these factors interact to shape behaviour. Given the exploratory nature of the study and the partly structured design of the interviews, grounded theory offered the flexibility and depth needed to capture emergent themes and build a theory that meaningfully reflects the complexity of this under-researched domain. Grounded theory has been successfully employed to explore various phenomena, including factors influencing user behaviour and mental health [39]. Ethical approval for this study was provided by the Institutional Review Board of the South China University of Technology. The following sections provide detailed information about the measurements, interview questions, participants and data analysis.

2.2. Interview Questions

The interview questions were designed to provide a comprehensive framework for understanding the reasons behind the use and non-use of AI technology for emotion regulation amongst young people. The interview guide consisted of three sections: basic demographic information, attitudes toward the use of AI technology for emotion regulation and reasons for the use and non-use of AI technology for emotion regulation. To ensure consistency across the interviews, the researcher developed a semi-structured interview guide, which helped ensure that all relevant information was gathered. Five participants from a pilot study provided feedback to refine the interview questions, ensuring they were clear and easily understood by young people. Key questions included “What do you like and dislike about AI-based emotion regulation tools?”, “What do you consider to be the strengths and weaknesses of the AI-based emotion regulation products you currently use?”, “What are the disadvantages of the emotion regulation methods you currently use?”, “What form of emotion regulation do you most plan to use in the future?”, “What are the main reasons for using AI technology for emotion regulation?” and “What are the main reasons for not using AI technology for emotion regulation?”. Each interview was conducted in Mandarin and lasted approximately 50 min.

2.3. Procedure

Before the interviews, all participants signed an informed consent form and were explicitly informed of their right to withdraw from the study without providing a reason. To minimise potential response bias, participants were assured that they could withdraw from the interview at any time, and all information collected would be kept confidential and anonymous. During the interviews, participants were encouraged to speak informally, allowing them to express their thoughts and experiences freely. With the participants’ consent, all individual interviews were audio-recorded. The recordings were subsequently transcribed verbatim for data analysis.

2.4. Participants

The researcher distributed a recruitment announcement outlining the study’s purpose and the interview process. Participants were recruited based on the following inclusion criteria: individuals aged between 22 and 32, residing in China and possessing a basic understanding of, or prior exposure to, AI-based tools or technologies relevant to emotion regulation. To minimise confounding factors and ensure participants could meaningfully reflect on the study topic, individuals with diagnosed severe mental health conditions or communication impairments were excluded. Forty participants, comprising 20 males and 20 females with a mean age of 25, took part in the study. According to Hennink and Kaiser [40], the typical sample size for qualitative research ranges from 15 to 60 participants. The sample size (40) in this study was within the acceptable range, as reviewed by Mason [41], who found that the mean sample size in 560 qualitative studies was 31. Most participants (80%) had a bachelor’s degree or higher, with 75% being science and engineering students. Additionally, 62.5% of the respondents were single. The interviews were conducted by a researcher trained in qualitative interviewing and grounded theory analysis. No prior relationship existed between the interviewer and participants. Interviews were conducted in private rooms to ensure confidentiality, with no non-participants present. While the initial sampling aimed to ensure demographic diversity and variation in AI technology use experience, theoretical saturation was monitored during the analysis. No new codes or themes emerged after approximately 35 interviews, indicating that theoretical saturation was achieved. The remaining interviews served to confirm the stability and consistency of the categories. Throughout the coding and analysis process, memo writing was carefully conducted with experienced researchers to document reflections, analytic decisions and the development of categories, supporting the integration of codes into higher-level concepts and the construction of the theoretical model. Constant comparison was systematically applied, examining data within and across the interviews and comparing emerging codes and categories to ensure that the analysis remained grounded in the participants’ narratives and captured diverse perspectives. The reporting of this qualitative study followed the Consolidated Criteria for Reporting Qualitative Research guidelines to enhance transparency and replicability [42].

2.5. Analysis

The audio-recorded interviews were transcribed verbatim and analysed using NVivo 11. All transcripts were anonymised to protect participant identity. The analysis followed a three-stage coding process consisting of open, axial and selective coding [43]. In the open coding stage, a detailed line-by-line examination of the transcripts was conducted, breaking the data into units of meaning and assigning initial codes [26]. For example, one participant’s statement “I use AI technology for emotion regulation when I feel stressed and overwhelmed” was coded as “usefulness” and “regulating emotional states” [44]. Another participant’s remark “AI tools don’t really understand my feelings; they just give generic suggestions” was coded as “emotions hard to understand” and “lack of trust”. This process enabled a detailed understanding of participants’ experiences with emotion regulation and their perceptions of using AI technology in this context. Through constant comparisons across interviews, similar codes were grouped into higher-level categories in the axial coding phase [45]. For instance, “emotions hard to understand” and “lack of trust” were integrated into the category “functional barriers”, while “usefulness” and “regulating emotional states” were grouped under “functional outcomes”. Axial coding thus facilitated the development of a conceptual framework by linking causal conditions, contextual factors, intervening variables and consequences [46]. In the selective coding phase, these axial categories contributed to the core constructs that formed the theoretical model: personal factors, technological factors and environmental factors influencing the use or non-use of AI technology for emotion regulation amongst young people [26]. This iterative process allowed the model to be built directly from participants’ narratives, grounded in the data. The coding was initially performed in Mandarin to retain the cultural and emotional nuances expressed by participants. Two researchers with training in qualitative methods independently conducted the initial open coding. They then met regularly to compare codes, resolve discrepancies through discussion and iteratively refine the coding framework. Axial and selective coding were jointly performed using the agreed-upon framework. Although intercoder reliability coefficients such as Cohen’s kappa were not calculated, coding consistency was ensured through intercoder dialogue and consensus-building, which are widely accepted practices in grounded theory research. The coding scheme and representative quotations were subsequently translated into English by bilingual researchers with experience in qualitative research and cross-cultural studies. To further enhance the accuracy of the data, back-translation and team discussions were conducted to resolve any ambiguities and ensure faithful representation of the original meaning.
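To illustrate how the three coding stages aggregate data, the following Python sketch maps the open codes quoted above into axial categories and then into core constructs. The codes and categories mirror the examples given in this section; the data structures, the assignment of categories to constructs and the tallying step are illustrative assumptions, not the actual NVivo workflow.

```python
# Illustrative sketch of the open -> axial -> selective coding hierarchy
# described above; the example codes mirror the text, but the structures and
# the category-to-construct mapping are assumptions for demonstration only.
from collections import Counter

# Open coding: quotations broken into units of meaning with initial codes.
open_codes = {
    "I use AI technology for emotion regulation when I feel stressed":
        ["usefulness", "regulating emotional states"],
    "AI tools don't really understand my feelings; generic suggestions":
        ["emotions hard to understand", "lack of trust"],
}

# Axial coding: related open codes grouped into higher-level categories.
axial_categories = {
    "usefulness": "functional outcomes",
    "regulating emotional states": "functional outcomes",
    "emotions hard to understand": "functional barriers",
    "lack of trust": "functional barriers",
}

# Selective coding: axial categories integrated into core model constructs
# (this mapping is an illustrative assumption).
core_constructs = {
    "functional outcomes": "technological factors",
    "functional barriers": "technological factors",
}

# Tally how often each axial category appears across the coded quotations.
counts = Counter(
    axial_categories[code]
    for codes in open_codes.values()
    for code in codes
)
print(counts)  # Counter({'functional outcomes': 2, 'functional barriers': 2})
print(core_constructs["functional barriers"])  # technological factors
```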

3. Results

3.1. Overview of the Data Coding and Analysis

A coding scheme was developed based on grounded theory [47] to explain the acceptance of AI technology for emotion regulation amongst young people. In this study, 584 quotations were coded and classified; Table 1 shows the coding scheme and the proportion of each category/subcategory. Of the 584 quotations, 16.25% were coded as attitudes toward using AI technology for emotion regulation, 56.07% as reasons for using AI technology for emotion regulation and 27.68% as reasons not to use AI technology for emotion regulation.
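As a quick arithmetic check, the top-level proportions can be reconciled with the 584 coded quotations. The short Python sketch below assumes only the percentages stated above and derives approximate quotation counts from them; the exact counts are not reported in the text.

```python
# Sanity-check sketch: the three top-level coding proportions reported above
# should account for all 584 quotations (within rounding).
reported_pct = {
    "attitudes toward using AI for emotion regulation": 16.25,
    "reasons for using AI for emotion regulation": 56.07,
    "reasons not to use AI for emotion regulation": 27.68,
}
print(sum(reported_pct.values()))  # 100.0
for category, pct in reported_pct.items():
    # Approximate quotation counts implied by the reported percentages.
    print(category, round(584 * pct / 100))
```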

3.2. Attitudes Toward AI Technology for Emotion Regulation

To explore young people’s attitudes toward the AI-based emotion regulation technology they currently use, participants were asked questions such as “What do you like and dislike about AI technology for emotion regulation?” and “What do you consider to be the strengths and weaknesses of the AI-based emotion regulation products you currently use?”. Participants’ responses were categorised into three attitudinal types: positive, neutral and negative evaluations. Amongst the coded statements, 47.24% reflected positive attitudes, 12.88% were neutral and 39.88% were negative. The results indicated that young people hold more positive than negative attitudes toward using AI technology for emotion regulation.
The most frequently cited reasons for positive attitudes were the effectiveness of AI technology in regulating emotional states (44.15%) and the functional value provided by AI technology (38.96%). Respondents consistently emphasised that enhancing positive emotions, reducing negative emotions and maintaining emotional stability are central to their evaluation of AI technology for emotion regulation. These capabilities are essential to promoting emotional well-being and managing psychological challenges. Additionally, respondents noted that they were often able to derive positive emotional experiences from AI technology. This finding aligned with the findings by Shi [48], who reported significant improvements in language retention and emotion regulation amongst users of AI technology. Similarly, Henkel et al. [49] found that achieving intrinsic emotion regulation goals through AI technology could enhance affective well-being. These insights suggested that, when young users perceive AI technology as effective, supportive and aligned with their emotion regulation needs, they are significantly likely to form positive attitudes toward its continued use.
Deriving negative impacts (55.39%) was the leading reason respondents negatively evaluated AI technology for emotion regulation. Other reasons included unresolved issues (15.38%), inconvenience (12.31%), low value (7.69%), complexity (4.62%), addiction (3.07%) and security and privacy risks (1.54%). Young people expressed concerns that the use of AI technology might worsen their emotional states, lead to addiction, increase their psychological stress or pose privacy and security risks [50]. These risks undermine users’ confidence in the technology, making them question its safety, reliability and appropriateness for personal use. Many participants perceived that AI technology fails to genuinely understand, interpret or respond to the complexity of human emotions. This perceived lack of emotional resonance and empathy contributed to a sense of disconnection and reduced trust, because users doubted that AI technology could provide the sensitivity required for effective emotion regulation. When individuals feel that AI technology may exacerbate rather than alleviate emotional distress or perceive the technology as invasive or insufficiently sensitive to their emotional needs, they are more likely to adopt a cautious or negative stance toward its use.
Neutral statements accounted for 12.88% of the responses and were often characterised by ambivalence regarding AI technology’s effectiveness or contextual usability for emotion regulation. The leading cause of neutral sentiment was an unsupportive usage environment (52.38%), followed by the lack of evident negative outcomes (19.04%), minimal positive effects (14.29%) and a single product form (14.29%). For example, some participants stated “There is little positive impact from using AI emotion regulation tools” or “The environment for using such tools is not conducive”. These responses suggested that contextual factors, such as the setting and availability of appropriate support, can significantly shape how AI technology is perceived, regardless of its technical capabilities.

3.3. Reasons for Using AI Technology for Emotion Regulation

This research focused on young people’s reasons for using AI technology for emotion regulation. Representative responses included the following: “The facilitating conditions can help me to use AI technology quickly, and in the case of a bad emotional condition, I can get a quick emotion regulation service experience”, “I can protect my private data from disclosure during the use of AI-based emotion regulation tools, making me feel more secure” and “I would like to be able to have positive feedback during the use process and give effective interactions”. The results showed that respondents’ choice to use AI technology for emotion regulation is mainly related to functional outcomes, which accounted for 76.48% of the reasons-for-use data.
Usefulness (23.57% of functional outcomes) emerged as the most prominent motivator for using AI technology for emotion regulation, captured in statements such as “AI technology is effective in regulating emotions and producing functional value”. Participants consistently stated that AI technology effectively regulated emotional states and provided tangible functional value. As Eldesouky et al. [51] highlighted, continued use is often contingent on demonstrated usefulness and tangible emotional benefits. The second-most frequently cited factor was privacy, with 21.66% of responses emphasising the need for secure data handling and a private usage environment. Trust, accounting for 15.29%, was linked to the integrity of data protection and the perceived reliability of the product’s physical appearance (e.g., colour, material and craftsmanship) and the service experience. Beyond these leading motivations, several additional functional outcomes also influenced user attitudes. Convenience was prominent (12.74%), with respondents valuing the ease of access, fast response and low learning effort required by AI technology, particularly in emotionally stressful situations. Intelligence, cited by 12.74% of participants, referred to the perceived ability of AI technology to understand emotional cues, respond appropriately and adapt to user needs [52]. Participants expected such systems to sense emotional states and deliver personalised suggestions through intuitive and natural modalities such as voice prompts. This construct included the AI technology’s responsiveness, contextual awareness and decision-making logic. Value was defined as users’ perception of the benefit-to-cost ratio when engaging with AI-based emotion regulation tools. This included an evaluation of whether the emotional, cognitive or time investment was justified by the support or outcomes received [53]. High value, cited by 5.10% of the participants, was associated with experiences where AI-based tools provided meaningful emotional support, offered practical insights or led to noticeable improvements in emotional well-being. This finding echoed Hill and Updegraff [54], who showed the importance of perceived value in shaping user engagement.
In addition to functional considerations, hedonic factors played a significant role, accounting for 22.08% of the reported reasons for using AI technology for emotion regulation [55]. Respondents indicated that enjoyment of the usage experience was also a key consideration beyond functionality. Specifically, 93.75% of the hedonic responses referred to interest and fun as motivating factors. As one participant described, “I use it when I’m bored. It feels more like playing a game than using a mental health tool.” Another echoed this sentiment, saying “The animations and sounds are fun. It cheers me up even before I start doing the exercises.” These responses illustrate how emotionally engaging and enjoyable interactions enhanced users’ willingness to use AI technology [56]. Therefore, integrating affective and enjoyable elements into the user experience is essential for increasing adoption and sustained engagement.
Although cited less frequently, social influence also contributed to users’ decisions, accounting for 1.44% of responses. One participant shared “My friends recommended I try it, so I gave it a go.” Another remarked “I noticed others using these tools, which made me feel it was normal to try.” These statements were initially coded as “peer influence” and “surrounding environment influence”, which were subsequently integrated into the broader category of “social influence” during axial coding. Social influence, including shared behaviours, product visibility and conversations surrounding AI tools, could shape preferences regarding product types, pricing and service content [57]. This finding suggested that social influence can have subtle but meaningful effects on adoption behaviour. In this study, participants were young people in China, where emotion regulation, especially in relation to mental health, is often considered a private matter. In collectivist cultures, individuals are often reluctant to openly share their emotional challenges [58]. Although the interview guide included prompts about social influences, participants primarily focused on individual needs, technological attributes and personal experiences when describing their use or non-use of AI technology for emotion regulation. This pattern suggested that functional and hedonic considerations were salient in their decision-making processes regarding the use of AI technology for emotion regulation.

3.4. Reasons for Not Using AI Technology for Emotion Regulation

This study explored the reasons why young people choose not to use AI technology for emotion regulation. Based on the thematic analysis of the interview data, the reasons for non-use were categorised into functional barriers (55.94%), dispositional barriers (41.96%) and environmental barriers (2.10%).
Functional barriers were the most frequently cited reason for non-use. The most prominent concern was low value, which accounted for 51.70% of the responses. Participants indicated that, if the emotional benefits of AI technology did not justify the time, effort or financial cost, they were unlikely to use it, consistent with Salim et al. [59]. Similarly, uselessness accounted for 27.50%. Young people reported that, if AI technology fails to deliver effective emotional support strategies, it becomes worthless and irrelevant to their needs. In addition, young people expressed concerns regarding the lack of trust in AI technology, inadequate functionality, insufficient emotional understanding and low accuracy in emotion recognition. One participant stated “I feel AI cannot really understand what I’m going through emotionally. It just gives generic advice that doesn’t help me much.” Another commented “I don’t feel safe sharing personal thoughts with an AI. What if the data leaks?” These reflections were coded as “emotions hard to understand” and “lack of trust”, which, through axial coding, were integrated into the category “functional barriers”. Specifically, young people expect AI technology to detect their emotional states accurately, to respond empathetically and to regulate those emotions effectively. If the recognition mechanisms were flawed, young people felt that the core function of emotion regulation could not be realised, thus undermining the purpose of using AI technology.
Dispositional barriers were also significant. These included inconvenience in usage (36.67%), high learning costs (33.33%) and uninspiring or unengaging service content (30.00%). Respondents strongly preferred AI technology that is easy to learn and quick to implement, especially given the time-sensitive nature of emotional crises [60]. Respondents also emphasised the need for engaging, novel experiences, such as virtual reality, gamified interaction or personalised visual feedback. For instance, Palomba [61] demonstrated that immersive VR environments can enhance emotion regulation and build user trust in the technology.
Environmental barriers (2.10%) were associated with factors beyond the user’s immediate control and, though less frequently reported, also affected user behaviour. One participant noted “I don’t know where to find reliable apps for this. It’s not something people talk about.” Another echoed this concern: “I am unaware of available AI technology for emotion regulation due to a lack of a platform or channel in my surroundings.” These responses were coded as “lack of access to information” and “no professional platform or channel” and were integrated into the broader category of “environmental barriers” in the model. In the absence of relevant AI products, participants tended to default to more accessible and familiar strategies, such as listening to music, reading, chatting with friends or browsing short videos via mobile apps [62]. Furthermore, the lack of information about AI technology for emotion regulation also limited young people’s engagement, with some participants stating they had never heard of AI technology for emotion regulation, indicating insufficient public exposure and awareness.
In addition to the above barriers, participants’ self-perceptions and social concerns also influenced their behaviour. Some respondents felt emotionally disconnected from such technologies, perceiving themselves as incompatible with AI-based emotion regulation technology [63]. Others expressed fear of social stigma, suggesting that the use of such technology may be misinterpreted as a sign of psychological problems [64]. These findings indicated that, beyond usability, self-identity and cultural attitudes play a role in shaping AI technology adoption behaviour and that future designs should consider low-profile, discreet interfaces to reduce perceived stigma and encourage broader acceptance.

3.5. Grounded Theory

Axial coding was employed to explore the relationships between these categories, constructing a preliminary conceptual framework illustrated in Figure 1. This framework is structured around five key components: causal conditions, external influences, phenomena, intervening conditions and consequences [65]. The figure illustrates the results of axial coding, suggesting that the central action of “use or non-use of AI technology for emotion regulation” is related to the other categories and subcategories embedded in these five components.
In the final stage (selective coding), all categories and themes were revisited and synthesised to construct a four-layer theoretical framework based on the qualitative data (Figure 2). The model identifies three primary contextual influences on choice behaviours: personal, technological and environmental. The core or innermost layer of the model is the “use or non-use of AI technology for emotion regulation”. Each surrounding layer exerts a direct or indirect influence on this central behaviour through its constructs or by interacting with adjacent layers.
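For readers who prefer a structural view, the layered framework can also be written down as a simple nested data structure. The sketch below lists the layers and constructs named in this section and the two that follow; the representation itself is an illustrative assumption, not part of the original analysis.

```python
# Illustrative representation of the four-layer grounded theory model (Figure 2);
# the layers and constructs are those named in the text, while the data
# structure itself is an assumption for demonstration only.
grounded_theory_model = {
    "core": "use or non-use of AI technology for emotion regulation",
    "layers": [
        {
            "context": "personal",
            "constructs": ["attitude toward AI for emotion regulation",
                           "dispositional characteristics", "emotional status"],
        },
        {
            "context": "technological",
            "constructs": ["usefulness", "perceived pleasure", "privacy",
                           "high value", "functionality"],
        },
        {
            "context": "environmental",
            "constructs": ["social influence", "situational influence",
                           "peer influence", "information sources"],
        },
    ],
}

# Print each layer from the innermost outward.
for layer in grounded_theory_model["layers"]:
    print(layer["context"], "->", ", ".join(layer["constructs"]))
```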
The second layer of this model is the personal context, which includes attitude toward using AI technology for emotion regulation, dispositional characteristics and emotional status [54]. These factors significantly shape the likelihood of engaging with AI-based emotion regulation technologies. As supported by Aslan [66], individuals’ preferred emotional support strategies are often informed by these personal evaluations. Most respondents in this study maintained a generally positive attitude toward the AI-based emotion regulation tools they currently use. They believed these tools were effective, which influenced their continued adoption. This behavioural tendency is consistent with the Theory of Planned Behaviour [67], which posits that, when individuals perceive a high level of behavioural control over a positively evaluated action, their intention to perform that action increases. Conversely, negative attitudes or scepticism about current emotion regulation tools often reduced the perceived relevance of AI technology, leading participants to overlook the advantages such technologies could offer. This resistance may result from habitual reliance on familiar strategies or a lack of trust in the emotional sensitivity of AI systems [68].
The third layer of the grounded theory model focuses on the technological context. Respondents indicated that their choices were shaped by key technology attributes, including usefulness, perceived pleasure, privacy, high value and functionality. Favourable user experiences, such as ease of use and interactive feedback, were cited as central motivations for technology adoption. Interactive feedback indicates the responsiveness of the AI system, including its ability to provide timely, relevant and two-way feedback that gives users a sense of being “heard” or attended to [69]. This finding aligned with Bandura [70], who posited that individuals are more likely to engage in a behaviour if they are aware of its potential positive outcomes. Functional and social benefits act as extrinsic incentives, driving adoption based on perceived value and social validation [71,72]. In contrast, hedonic and personally meaningful experiences serve as intrinsic motivators, encouraging sustained use through enjoyment, novelty and emotional engagement. Young people also emphasised the importance of trustworthiness, particularly regarding data security and the credibility of the emotional feedback provided [73]. In addition to these benefits, certain product characteristics were also cited as barriers when poorly implemented, such as complex interfaces, a lack of engaging content or insufficient privacy assurance. Prior studies support these findings, indicating that young users prefer emotionally engaging products with simple, intuitive operations and responsive feedback [74,75]. Product-related factors thus act as antecedents in shaping user attitudes and behavioural intentions within the acceptance model of AI technology for emotion regulation.
The environmental context is the fourth and outermost layer of the grounded theory model. This layer includes social influence, situational influence, peer influence and information sources. Young people’s choice behaviours towards using AI technology for emotion regulation are influenced by values, norms, social trends and surrounding referents, especially family and friends. For example, if those in their social circle used AI technology for emotion regulation, they were more likely to explore or adopt it. Similarly, information and signals conveyed by mass media, including short videos, television, the Internet and other secondary sources, influence their behaviour [76]. As supported by Fan et al. [77], social feedback and broader cultural perceptions could act as either facilitators or inhibitors. Importantly, limited access to information was identified as a barrier: some respondents noted little awareness of AI technology due to the absence of public discourse or promotional efforts in their immediate environment. On the other hand, negative social perceptions, product design flaws and individual resistance could hinder young people’s engagement in using AI technology for emotion regulation.

4. Discussion

This study achieved its objective of exploring young people’s attitudes and behavioural motivations related to the use of AI technology for emotion regulation. Through qualitative interviews with 40 participants and grounded theory analysis, the research developed a theoretical model that explained the factors influencing young people’s use or non-use of AI technology for emotion regulation. Specifically, the grounded theory model demonstrated the causal conditions, external influences, intervening conditions and consequences of using or not using AI technology for emotion regulation. The model suggests that choice behaviours are influenced by factors in three primary contexts: personal, technological and environmental. Personal factors include attitudes, individual perceptions and trust in AI technology for emotion regulation. Technological factors include usefulness, perceived fun, privacy, high value and emotional processing. Environmental factors encompass social influence, environmental influence, convenience and information sources. The findings of this study aligned closely with constructs in established technology acceptance models. For example, participants’ emphasis on usefulness and convenience mirrors the perceived usefulness and ease of use constructs in the technology acceptance model [78]. Similarly, trust and privacy factors, akin to facilitating conditions and perceived risk, are considered in extensions of the Unified Theory of Acceptance and Use of Technology [79]. These parallels suggest that the adoption of AI technology for emotion regulation amongst young people can be understood within these established theoretical frameworks. The theoretical contributions of this study to the related literature include (1) providing qualitative data on young people’s behavioural choices in using AI technology for emotion regulation, (2) exploring young people’s attitudes toward AI technology for emotion regulation, (3) clarifying the reasons for using or not using AI technology for emotion regulation and (4) building a theory based on the qualitative data that explains the degree of young people’s acceptance of AI technology for emotion regulation.
Practically, the findings offer important insights for designing and developing AI technology for emotion regulation. First, the practicality of AI technology should be improved, because young people expressed a desire to experience effective emotion regulation. Second, AI technology for emotion regulation should convey perceived fun [80]; the results showed that young people are willing to engage with immersive and interesting AI technology. Third, privacy protection is a requirement: young people indicated that they do not want their personal information to be disclosed after using the products. Furthermore, it is important to consider inclusivity in the design of AI technology for emotion regulation. AI technology for emotion regulation should accommodate the diverse abilities, sensory profiles and interaction preferences of young people, including those with neurodevelopmental differences [81]. Addressing the accessibility and inclusivity of AI technology can help ensure that it provides meaningful support to a broad population of youth navigating emotional challenges. Lastly, young people require AI technology for emotion regulation to help them deal with their emotions; processing emotions through a realistic experience helps them obtain relief and catharsis. The study also highlighted the demand for accessible, high-quality emotional support services tailored to the preferences and expectations of young people.
Despite its contributions, the study had several limitations. First, the interrelationships and relative weights amongst the identified categories remain underexplored. Second, the theoretical and practical significance of each factor has not been quantitatively validated. Third, as with all grounded theory studies, the proposed model is interpretive and may vary depending on the researchers’ perspectives [82]. Future research should adopt quantitative or mixed-method approaches to test the robustness of the model, measure the influence of each factor and explore the generalisability of the findings across broader populations. Finally, while this study provided in-depth qualitative insights, the absence of standardised attitude measures represents a methodological limitation. Future research could strengthen the findings by incorporating psychometrically validated instruments to triangulate qualitative data with quantitative metrics.

5. Conclusions

This study aimed to identify young people’s attitudes towards the use of AI technology for emotion regulation and to analyse the factors that influence their decision to use or not use such technologies. The findings revealed that personal, technological and environmental factors jointly shape young people’s behaviours, with perceived usefulness, trust, privacy and convenience emerging as key motivators, while concerns about emotional connection, value and complexity acted as barriers. These insights provided initial guidance for the design of AI technology for emotion regulation that is inclusive, accessible and aligned with the needs of young users. Future research with larger and more diverse samples is needed to validate and extend the theoretical model proposed in this study.

Author Contributions

J.W., conceptualisation, formal analysis, investigation, methodology and writing—original draft; H.T., conceptualisation, data curation, formal analysis, investigation, methodology and writing—original draft; S.-S.M., conceptualisation, funding acquisition, methodology, writing—original draft and writing—review and editing; Y.C., conceptualisation, investigation, methodology, resources and writing—original draft; S.Z., conceptualisation, investigation, methodology, project administration, resources, supervision, validation, writing—original draft and writing—review and editing; H.-S.C., conceptualisation, project administration, resources, supervision and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (72301110), Philosophy and Social Science Planning Project of Guangdong Province of China (GD25YYS35), the Guangzhou Municipal Science and Technology Bureau (2024A04J2279) and the Fundamental Research Funds for the Central Universities (QNMS202418).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the South China University of Technology (code: 20230015, 5 September 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: artificial intelligence

References

  1. Thapar, A.; Eyre, O.; Patel, V.; Brent, D. Depression in young people. Lancet 2022, 400, 617–631. [Google Scholar] [CrossRef] [PubMed]
  2. Mahfoud, D.; Pardini, S.; Mróz, M.; Hallit, S.; Obeid, S.; Akel, M.; Novara, C.; Brytek-Matera, A. Profiling orthorexia nervosa in young adults: The role of obsessive behaviour, perfectionism, and self-esteem. J. Eat. Disord. 2023, 11, 188. [Google Scholar] [CrossRef] [PubMed]
  3. Gómez-Galán, J.; Lázaro-Pérez, C.; Martínez-López, J.Á. Exploratory study on video game addiction of college students in a pandemic scenario. J. New Approaches Educ. Res. 2021, 10, 330–346. [Google Scholar] [CrossRef]
  4. Carballo-Marquez, A.; Ampatzoglou, A.; Rojas-Rincón, J.; Garcia-Casanovas, A.; Garolera, M.; Fernández-Capo, M.; Porras-Garcia, B. Improving Emotion Regulation, Internalizing Symptoms and Cognitive Functions in Adolescents at Risk of Executive Dysfunction—A Controlled Pilot VR Study. Appl. Sci. 2025, 15, 1223. [Google Scholar] [CrossRef]
  5. Tabares, M.T.; Álvarez, C.V.; Salcedo, J.B.; Rendón, S.M. Anxiety in young people: Analysis from a machine learning model. Acta Psychol. 2024, 248, 104410. [Google Scholar] [CrossRef]
  6. Telzer, E.H.; Kwon, S.-J.; Jorgensen, N.A. Neurobiological Development in Adolescence and Early Adulthood: Implications for Positive Youth Adjustment; American Psychological Association: Washington, DC, USA, 2023. [Google Scholar]
  7. Criss, M.M.; Cui, L.; Wood, E.E.; Morris, A.S. Associations between emotion regulation and adolescent adjustment difficulties: Moderating effects of parents and peers. J. Child Fam. Stud. 2021, 30, 1979–1989. [Google Scholar] [CrossRef]
  8. Edwards, E.R.; Wupperman, P. Research on emotional schemas: A review of findings and challenges. Clin. Psychol. 2019, 23, 3–14. [Google Scholar] [CrossRef]
  9. Faramarzi, A.; Sharini, H.; Shanbehzadeh, M.; Pour, M.Y.; Fooladi, M.; Jalalvandi, M.; Amiri, S.; Kazemi-Arpanahi, H. Anhedonia symptoms: The assessment of brain functional mechanism following music stimuli using functional magnetic resonance imaging. Psychiatry Res. Neuroimaging 2022, 326, 111532. [Google Scholar] [CrossRef]
  10. Du, Y.; Hua, L.; Tian, S.; Dai, Z.; Xia, Y.; Zhao, S.; Zou, H.; Wang, X.; Sun, H.; Zhou, H. Altered beta band spatial-temporal interactions during negative emotional processing in major depressive disorder: An MEG study. J. Affect. Disord. 2023, 338, 254–261. [Google Scholar] [CrossRef]
  11. Khedr, M.A.; Alharbi, T.A.F.; Alkaram, A.A.; Hussein, R.M. Impact of resilience-based intervention on emotional regulation, grit and life satisfaction among female Egyptian and Saudi nursing students: A randomized controlled trial. Nurse Educ. Pract. 2023, 73, 103830. [Google Scholar] [CrossRef]
  12. Klausner, E.A.; Rose, T.M.; Gundrum, D.A.; McMorris, T.E.; Lang, L.A.; Shan, G.; Chu, A. Evaluating the Effects of a Mindfulness Mobile Application on Student Pharmacists’ Stress, Burnout, and Mindfulness. Am. J. Pharm. Educ. 2023, 87, 100259. [Google Scholar] [CrossRef]
  13. Pavlopoulos, A.; Rachiotis, T.; Maglogiannis, I. An Overview of Tools and Technologies for Anxiety and Depression Management Using AI. Appl. Sci. 2024, 14, 9068. [Google Scholar] [CrossRef]
  14. Moore, P.V. Jerry Kaplan artificial intelligence: What everyone needs to know. Organ. Stud. 2019, 40, 466–470. [Google Scholar] [CrossRef]
  15. Thakkar, A.; Gupta, A.; De Sousa, A. Artificial intelligence in positive mental health: A narrative review. Front. Digit. Health 2024, 6, 1280235. [Google Scholar] [CrossRef]
  16. Nguyen, D.; Nguyen, M.T.; Yamada, K. Electroencephalogram Based Emotion Recognition Using Hybrid Intelligent Method and Discrete Wavelet Transform. Appl. Sci. 2025, 15, 2328. [Google Scholar] [CrossRef]
  17. Alheeti, A.A.M.; Salih, M.M.M.; Mohammed, A.H.; Hamood, M.A.; Khudhair, N.R.; Shakir, A.T. Emotion Recognition of Humans using modern technology of AI: A Survey. In Proceedings of the 2023 7th International Symposium on Innovative Approaches in Smart Technologies (ISAS), Istanbul, Turkiye, 23–25 November 2023; pp. 1–10. [Google Scholar]
  18. Singh, G.V.; Firdaus, M.; Chauhan, D.S.; Ekbal, A.; Bhattacharyya, P. Zero-shot multitask intent and emotion prediction from multimodal data: A benchmark study. Neurocomputing 2024, 569, 127128. [Google Scholar] [CrossRef]
  19. Yeh, P.-L.; Kuo, W.-C.; Tseng, B.-L.; Sung, Y.-H. Does the AI-driven Chatbot Work? Effectiveness of the Woebot app in reducing anxiety and depression in group counseling courses and student acceptance of technological aids. Curr. Psychol. 2025, 44, 8133–8145. [Google Scholar] [CrossRef]
  20. Chalabianloo, N.; Can, Y.S.; Umair, M.; Sas, C.; Ersoy, C. Application level performance evaluation of wearable devices for stress classification with explainable AI. Pervasive Mob. Comput. 2022, 87, 101703. [Google Scholar] [CrossRef]
  21. Conderman, G.; Van Laarhoven, T.; Johnson, J.; Liberty, L. Wearable technologies for anxious adolescents. Clear. House A J. Educ. Strateg. Issues Ideas 2020, 94, 1–7. [Google Scholar] [CrossRef]
  22. Song, Y.; Wu, K.; Ding, J. Developing an immersive game-based learning platform with generative artificial intelligence and virtual reality technologies–“LearningverseVR”. Comput. Educ. X Real. 2024, 4, 100069. [Google Scholar] [CrossRef]
  23. Man, S.S.; Li, X.; Lin, X.J.; Lee, Y.-C.; Chan, A.H.S. Assessing the Effectiveness of Virtual Reality Interventions on Anxiety, Stress, and Negative Emotions in College Students: A Meta-Analysis of Randomized Controlled Trials. Int. J. Hum.–Comput. Interact. 2024, 1–17. [Google Scholar] [CrossRef]
  24. Wang, E.; Chang, W.-L.; Shen, J.; Bian, Q.; Huang, P.; Lai, X.; Chang, T.-Y.; Shaoying, H.; Ziyue, Z.; Wang, Z. Effect of artificial intelligence on one-to-one emotional regulation and psychological intervention system of middle school students. Int. J. Neuropsychopharmacol. 2022, 25, A62–A63. [Google Scholar] [CrossRef]
  25. Chakravarthi, B.; Ng, S.-C.; Ezilarasan, M.R.; Leung, M.-F. EEG-based emotion recognition using hybrid CNN and LSTM classification. Front. Comput. Neurosci. 2022, 16, 1019776. [Google Scholar] [CrossRef] [PubMed]
  26. Draucker, C.B.; Martsolf, D.S.; Ross, R.; Rusk, T.B. Theoretical sampling and category development in grounded theory. Qual. Health Res. 2007, 17, 1137–1148. [Google Scholar] [CrossRef]
  27. Upjohn, M.; Attwood, G.; Lerotholi, T.; Pfeiffer, D.; Verheyen, K. Quantitative versus qualitative approaches: A comparison of two research methods applied to identification of key health issues for working horses in Lesotho. Prev. Vet. Med. 2013, 108, 313–320. [Google Scholar] [CrossRef]
  28. Bogdan, R.; Biklen, S.K. Qualitative Research for Education; Allyn & Bacon: Boston, MA, USA, 1997; Volume 368. [Google Scholar]
  29. Bolderston, A. Conducting a research interview. J. Med. Imaging Radiat. Sci. 2012, 43, 66–76. [Google Scholar] [CrossRef]
30. Wong, T.K.M.; Man, S.S.; Chan, A.H.S. Critical factors for the use or non-use of personal protective equipment amongst construction workers. Saf. Sci. 2020, 126, 104663.
31. Man, S.S.; Chan, A.H.S.; Wong, H.M. Risk-taking behaviors of Hong Kong construction workers—A thematic study. Saf. Sci. 2017, 98, 25–36.
32. Karemaker, M.; Ten Hoor, G.A.; Hagen, R.R.; van Schie, C.H.; Boersma, K.; Ruiter, R.A. Elderly about home fire safety: A qualitative study into home fire safety knowledge and behaviour. Fire Saf. J. 2021, 124, 103391.
33. Lee, W.K.H.; Man, S.S.; Chan, A.H.S. Cogeneration System Acceptance in the Hotel Industry: A Qualitative Study. J. Hosp. Tour. Manag. 2022, 51, 339–345.
34. James, R.; Hodson, K.; Mantzourani, E.; Davies, D. Exploring the implementation of Discharge Medicines Review referrals by hospital pharmacy professionals: A qualitative study using the consolidated framework for implementation research. Res. Soc. Adm. Pharm. 2023, 19, 1558–1569.
35. Ahlberg, M.; Berterö, C.; Ågren, S. Family functioning of families experiencing intensive care and the specific impact of the COVID-19 pandemic: A grounded theory study. Intensive Crit. Care Nurs. 2023, 76, 103397.
36. Ng, J.Y.; Usman, M.S.; Gilotra, K.; Guyatt, G.H.; Levine, M.A.; Busse, J.W. Attitudes towards medical cannabis among Ontario family physicians: A qualitative interview study. Eur. J. Integr. Med. 2021, 48, 101952.
37. Rosenberg, L.; Kottorp, A.; Nygård, L. Readiness for technology use with people with dementia: The perspectives of significant others. J. Appl. Gerontol. 2012, 31, 510–530.
38. Glaser, B.; Strauss, A. Discovery of Grounded Theory: Strategies for Qualitative Research; Routledge: Abingdon, UK, 2017.
39. Sun, Y.; Li, Z.; Liu, Z. Usability Study of Museum Website Based on Analytic Hierarchy Process: A Case of Foshan Museum Website. In Proceedings of the International Conference on Human-Computer Interaction, Virtual Event, 26 June–1 July 2022; pp. 504–525.
40. Hennink, M.; Kaiser, B.N. Sample sizes for saturation in qualitative research: A systematic review of empirical tests. Soc. Sci. Med. 2022, 292, 114523.
41. Mason, M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual. Soc. Res. Soz. 2010, 11, 8.
42. Tong, A.; Sainsbury, P.; Craig, J. Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. Int. J. Qual. Health Care 2007, 19, 349–357.
43. Corbin, J.; Strauss, A. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory; Sage Publications: Thousand Oaks, CA, USA, 2014.
44. Fassinger, R.E. Paradigms, praxis, problems, and promise: Grounded theory in counseling psychology research. J. Couns. Psychol. 2005, 52, 156–166.
45. Strauss, A.; Corbin, J. Basics of Qualitative Research; Sage: Newbury Park, CA, USA, 1990; Volume 15.
46. Kelle, U. The development of categories: Different approaches in grounded theory. In The Sage Handbook of Grounded Theory; Sage Publications: Thousand Oaks, CA, USA, 2007; pp. 191–213.
47. Angrosino, M. Doing Ethnographic and Observational Research; Sage Publications: Thousand Oaks, CA, USA, 2007.
48. Shi, L. The integration of advanced AI-enabled emotion detection and adaptive learning systems for improved emotional regulation. J. Educ. Comput. Res. 2025, 63, 173–201.
49. Henkel, A.P.; Bromuri, S.; Iren, D.; Urovi, V. Half human, half machine–augmenting service employees with AI for interpersonal emotion regulation. J. Serv. Manag. 2020, 31, 247–265.
50. Carney, J.; Robertson, C. Five studies evaluating the impact on mental health and mood of recalling, reading, and discussing fiction. PLoS ONE 2022, 17, e0266323.
51. Eldesouky, L.; Ellis, K.; Goodman, F.; Khadr, Z. Daily emotion regulation and emotional well-being: A replication and extension in Egypt. Curr. Res. Ecol. Soc. Psychol. 2023, 4, 100106.
52. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, challenges, and future directions. Humanit. Soc. Sci. Commun. 2024, 11, 1568.
53. Mohr, D.C.; Zhang, M.; Schueller, S.M. Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annu. Rev. Clin. Psychol. 2017, 13, 23–47.
54. Hill, C.L.; Updegraff, J.A. Mindfulness and its relationship to emotional regulation. Emotion 2012, 12, 81.
55. Balleyer, A.H.; Fennis, B.M. Hedonic consumption in times of stress: Reaping the emotional benefits without the self-regulatory cost. Front. Psychol. 2022, 13, 685552.
56. Hamari, J.; Koivisto, J.; Sarsa, H. Does gamification work?—A literature review of empirical studies on gamification. In Proceedings of the 2014 47th Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 6–9 January 2014; pp. 3025–3034.
57. Roberts, M.E.; Clarkson, J.J.; Cummings, E.L.; Ragsdale, C.M. Facilitating emotional regulation: The interactive effect of resource availability and reward processing. J. Exp. Soc. Psychol. 2017, 69, 65–70.
58. Chen, S.X.; Cheung, F.M.; Bond, M.H.; Leung, J.-P. Decomposing the construct of ambivalence over emotional expression in a Chinese cultural context. Eur. J. Personal. 2005, 19, 185–204.
59. Salim, T.A.; El Barachi, M.; Mohamed, A.A.D.; Halstead, S.; Babreak, N. The mediator and moderator roles of perceived cost on the relationship between organizational readiness and the intention to adopt blockchain technology. Technol. Soc. 2022, 71, 102108.
60. Green, G. Analysis of the mediating effect of resistance to change, perceived ease of use, and behavioral intention to use technology-based learning among younger and older nursing students. J. Prof. Nurs. 2024, 50, 66–72.
61. Palomba, A. Virtual perceived emotional intelligence: How high brand loyalty video game players evaluate their own video game play experiences to repair or regulate emotions. Comput. Hum. Behav. 2018, 85, 34–42.
62. Ko, K.S.; Lee, W.K. A preliminary study using a mobile app as a dance/movement therapy intervention to reduce anxiety and enhance the mindfulness of adolescents in South Korea. Arts Psychother. 2023, 85, 102062.
63. Yu, W.-J.; Hung, S.-Y.; Yu, A.P.-I.; Hung, Y.-L. Understanding consumers’ continuance intention of social shopping and social media participation: The perspective of friends on social media. Inf. Manag. 2024, 61, 103808.
64. Cheshin, A.; Amit, A.; Van Kleef, G.A. The interpersonal effects of emotion intensity in customer service: Perceived appropriateness and authenticity of attendants’ emotional displays shape customer trust and satisfaction. Organ. Behav. Hum. Decis. Process. 2018, 144, 97–111.
65. Thompson, C.B.; Walker, B.L. Basics of research (part 12): Qualitative research. Air Med. J. 1998, 17, 65–70.
66. Aslan, H. The influence of halal awareness, halal certificate, subjective norms, perceived behavioral control, attitude and trust on purchase intention of culinary products among Muslim costumers in Turkey. Int. J. Gastron. Food Sci. 2023, 32, 100726.
67. Ajzen, I. From intentions to actions: A theory of planned behavior. In Action Control: From Cognition to Behavior; Springer: Berlin/Heidelberg, Germany, 1985.
68. MacNamara, A.; Joyner, K.; Klawohn, J. Event-related potential studies of emotion regulation: A review of recent progress and future directions. Int. J. Psychophysiol. 2022, 176, 73–88.
69. Lisetti, C.; Amini, R.; Yasavur, U.; Rishe, N. I can help you change! An empathic virtual agent delivers behavior change health interventions. ACM Trans. Manag. Inf. Syst. 2013, 4, 1–28.
70. Bandura, A. Social Foundations of Thought and Action: A Social Cognitive Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1986.
71. Man, S.S.; Wang, J.; Chan, A.H.S.; Liu, L. Ageing in the digital age: What drives virtual reality technology adoption among older adults? Ergonomics 2025, 1–15.
72. Man, S.S.; Ding, M.; Li, X.; Chan, A.H.S.; Zhang, T. Acceptance of highly automated vehicles: The role of facilitating condition, technology anxiety, social influence and trust. Int. J. Hum.–Comput. Interact. 2025, 41, 3684–3695.
73. Zhang, T.; Li, J.; Qiao, L.; Zhang, Y.; Li, W.; Man, S.S. Evaluation of drivers’ mental model, trust, and reliance toward level 2 automated vehicles. Int. J. Hum.–Comput. Interact. 2025, 41, 3696–3707.
74. Mohsin, A.; Lengler, J.; Chaiya, P. Does travel interest mediate between motives and intention to travel? A case of young Asian travellers. J. Hosp. Tour. Manag. 2017, 31, 36–44.
75. Yoon, J.; Pohlmeyer, A.E.; Desmet, P.M.; Kim, C. Designing for positive emotions: Issues and emerging research directions. Des. J. 2020, 24, 167–187.
76. Alam, M.N.; Ogiemwonyi, O.; Alshareef, R.; Alsolamy, M.; Mat, N.; Azizan, N.A. Do social media influence altruistic and egoistic motivation and green purchase intention towards green products? An experimental investigation. Clean. Eng. Technol. 2023, 15, 100669.
77. Fan, L.; Wang, Y.; Mou, J. Enjoy to read and enjoy to shop: An investigation on the impact of product information presentation on purchase intention in digital content marketing. J. Retail. Consum. Serv. 2024, 76, 103594.
78. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003.
79. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178.
80. Man, S.S.; Guo, Y.; Chan, A.H.S.; Zhuang, H. Acceptance of online mapping technology among older adults: Technology acceptance model with facilitating condition, compatibility, and self-satisfaction. ISPRS Int. J. Geo-Inf. 2022, 11, 558.
81. Pergantis, P.; Bamicha, V.; Doulou, A.; Christou, A.I.; Bardis, N.; Skianis, C.; Drigas, A. Assistive and Emerging Technologies to Detect and Reduce Neurophysiological Stress and Anxiety in Children and Adolescents with Autism and Sensory Processing Disorders: A Systematic Review. Technologies 2025, 13, 144.
82. Cutcliffe, J.R. Methodological issues in grounded theory. J. Adv. Nurs. 2000, 31, 1476–1484.
Figure 1. Coding results.
Figure 2. Selective coding results.
Table 1. The coding scheme and the proportion of each code (n = 584).

| Categories | Subcategories | Definition | Codes (% within subcategory) |
|---|---|---|---|
| Attitudes toward using AI technology for emotion regulation (16.25%) | Positive attitudes (47.24%) | A person’s favourable evaluation of using AI technology for emotion regulation | Regulating emotional states (44.15%); Functional values (38.96%); Raising emotional awareness (12.99%); Addressing the source problem (3.90%) |
| | Neutral attitudes (12.88%) | A person’s neither favourable nor unfavourable evaluation of using AI technology for emotion regulation | Usage environment (52.38%); No disadvantages (19.04%); Low positive impact (14.29%); Single form (14.29%) |
| | Negative attitudes (39.88%) | A person’s unfavourable evaluation of using AI technology for emotion regulation | Negative effects (55.39%); Unresolved issues (15.38%); Inconvenience (12.31%); Poor quality-price ratio (7.69%); Complexity (4.62%); Addiction (3.07%); Security and privacy risks (1.54%) |
| Reasons for the use of AI technology for emotion regulation (56.07%) | Functional outcomes (76.48%) | The extent to which using AI technology for emotion regulation is perceived to be instrumental in achieving valued outcomes | Usefulness (23.57%); Privacy (21.66%); Trust (15.29%); Convenience (12.74%); Intelligent (12.74%); High value (5.10%); Accessibility (4.44%); Functionality (2.55%); Interactive feedback (1.91%) |
| | Hedonic outcomes (22.08%) | The individual’s level of curiosity during the interaction and the perception that the interaction is intrinsically enjoyable | Interest and fun (93.75%); Beautiful interface (4.17%); Emotional experience (2.08%) |
| | Social influence (1.44%) | The extent to which social values and members of a social network influence user behaviour | Peer influence (66.67%); Surrounding environment influence (33.33%) |
| Reasons for the non-use of AI technology for emotion regulation (27.68%) | Functional barriers (55.94%) | The extent to which the non-use of AI technology for emotion regulation is perceived to be instrumental in achieving valued outcomes | Low value (51.76%); Uselessness (27.50%); Lack of trust (16.25%); Inadequate functionality (1.99%); Emotions hard to understand (1.25%); Low accuracy (1.25%) |
| | Dispositional barriers (41.96%) | Personal factors associated with individuals’ attitudes and self-perceptions of themselves as users | Inconvenience (36.67%); Difficulty with learning (33.33%); No interest (30.00%) |
| | Environmental barriers (2.10%) | Factors that are beyond one’s control and are related to the individual’s life situation or environment at a particular time | No relevant products (56.69%); No professional platform or channel (30.75%); Lack of information (12.56%) |
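Note that the percentages in Table 1 are nested rather than global: category percentages are shares of all 584 coded references, subcategory percentages are shares of their parent category, and code percentages are shares of their parent subcategory (each level sums to 100%). As a reading aid, the short Python sketch below recovers approximate raw code counts under this nesting; the example path and variable names are illustrative only and are not part of the study’s analysis.

```python
# Reading aid for Table 1 (a minimal sketch, not part of the study's analysis).
# Assumption: percentages are nested, i.e. each level is a share of its parent
# level, which is consistent with every level in the table summing to 100%.

N_TOTAL = 584  # total number of coded references reported in Table 1

# One illustrative path through the table:
# Reasons for use (56.07%) -> Functional outcomes (76.48%) -> Usefulness (23.57%)
category_pct, subcategory_pct, code_pct = 0.5607, 0.7648, 0.2357

category_count = N_TOTAL * category_pct               # ~327 coded references
subcategory_count = category_count * subcategory_pct  # ~250 coded references
code_count = subcategory_count * code_pct             # ~59 coded references

print(round(category_count), round(subcategory_count), round(code_count))
# Expected output: 327 250 59
```

Because the reported percentages are rounded, the recovered counts are approximate and may not sum exactly to their parent totals.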
