1. Introduction
Social media use now begins at an early age, particularly among Generation Z; the number of active users worldwide is estimated at around 5.24 billion, and adolescents spend, on average, about 141 min per day on these platforms [
1,
2]. Facebook remains the most widely used network, followed by YouTube, WhatsApp, Instagram, and TikTok [
2]. However, this rapid digital immersion has brought significant downsides, including excessive screen time [
3], increased sexting incidents [
4], increased diagnosis of health conditions [
4], and rising instances of sleep deprivation [
5]. From a developmental perspective, adolescents are particularly vulnerable to the persuasive design and targeted advertising commonly found on social media platforms. Consequently, the
digital pandemic has fostered a harmful psychosocial environment characterized by cyberbullying, misinformation, and declining mental health among young people (
Figure 1).
This evolving crisis demands urgent, multifaceted interventions. Central to the concern is the role of Generative Artificial Intelligence (AI)—including large language models (LLMs)—in shaping the digital ecosystem. While LLMs focus on generating human-like text, generative AI more broadly creates data across various media formats such as images, audio, video, and code. The rising influence of these technologies in digital spaces underscores the need to assess their potential psychological impact on adolescents. Addressing the
digital pandemic therefore requires not only regulating social media content but also critically evaluating how generative AI shapes online experiences and contributes to youth mental health challenges [
6].
2. Methodology
PubMed was searched for studies examining the potential mechanisms that may drive the newly introduced concept of the digital pandemic, using the keyword “screen media use” in combination with “adolescents”, “artificial intelligence algorithms”, “behavioral conditioning”, “cyberbullying”, “echo chambers”, “filter bubbles”, “internalizing/externalizing”, “internet addiction”, “mental health”, “online predators”, “screen time”, “suicide”, “suicide ideation”, and “self-harm”. All relevant articles, including experimental research, cross-sectional and longitudinal epidemiological studies, and systematic reviews, were reviewed and synthesized in this integrative review.
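For transparency, the keyword-pairing strategy above can be expressed programmatically. The following is a minimal illustrative sketch, not the authors' actual search script; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the search strategy: the primary keyword
# "screen media use" is combined with each secondary keyword via AND,
# yielding one PubMed-style query string per pairing.
PRIMARY = "screen media use"
SECONDARY = [
    "adolescents", "artificial intelligence algorithms",
    "behavioral conditioning", "cyberbullying", "echo chambers",
    "filter bubbles", "internalizing/externalizing", "internet addiction",
    "mental health", "online predators", "screen time", "suicide",
    "suicide ideation", "self-harm",
]

def build_queries(primary, secondary):
    """Return one quoted, AND-joined query string per secondary keyword."""
    return [f'"{primary}" AND "{term}"' for term in secondary]

queries = build_queries(PRIMARY, SECONDARY)
print(queries[0])    # "screen media use" AND "adolescents"
print(len(queries))  # 14
```

Each resulting string could be submitted to the PubMed search interface directly; exact-phrase quoting and the Boolean AND operator follow standard PubMed query syntax.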
3. The Urgent Challenge: A Digital Pandemic in Youth Mental Health
Phone-centered lifestyles and social media platforms, driven by attention-optimizing AI algorithms, often exploit psychological tendencies by amplifying negative emotions such as hatred, fear, jealousy, and discrimination [
6]. This dynamic fosters a toxic digital environment that can profoundly impact the physical, psychological, and social well-being of youth, whose mental and physical development is still in progress. Additionally, cyberbullying has emerged as a widespread issue, enabled by the anonymity of online interactions, and has been linked to higher incidences of depression, anxiety, and even suicidal thoughts [
6]. For instance, cyberbullying has been associated with symptoms of post-traumatic stress disorder (PTSD) among adolescents in the U.K. [
7].
Excessive screen time and the addictive nature of social media further contribute to disruptions in sleep, reduced physical activity, and declining overall mental and physical health [
8]. This concerning trend mirrors the dynamics of a
digital pandemic, where harmful mechanisms spread like a viral outbreak, affecting young minds and bodies. Just as pandemics require a comprehensive response, tackling this digital crisis requires a coordinated effort from all stakeholders to mitigate its widespread impacts.
4. Mechanisms of the Digital Pandemic
The
digital pandemic can be propagated through three key components: AI algorithms, malicious users, and users with undetected mental health issues. First, AI algorithms may drive the
digital pandemic through mechanisms such as emotion triggering, the creation of filter bubbles and echo chambers, the amplification of negativity, and behavioral conditioning. Second, malicious users contribute to the
digital pandemic by promoting cyberbullying, spreading harmful ideologies, and engaging in abuse or exploitation. Third, users with undetected mental health issues may unintentionally contribute to the
digital pandemic via unfiltered and unconscious expressions, emotional manipulation, and the amplification of negative emotions (
Figure 2).
4.1. AI Algorithms
AI algorithms may contribute to the digital pandemic by triggering emotions in users, amplifying the negativity of harmful content on social media, and reinforcing behavioral conditioning. This, in turn, can foster addictive behaviors through the creation of echo chambers and filter bubbles.
4.1.1. Emotion Triggering
AI algorithms often prioritize content that provokes strong emotional reactions, especially negative emotions like anger or fear. This can be particularly harmful to adolescents who are already dealing with mental health challenges or emotional instability, as it may intensify their negative emotional responses. A study involving 1145 participants, mainly from the U.S. and U.K., examined the relationship between information-seeking behavior and mental health and showed that browsing negative information was linked to poor mental health and mood [
9]. Moreover, this relationship is both causal and bidirectional, creating a feedback loop that can perpetuate mental health challenges [
9]. In addition, recent research has shown that hourly screen time contributes to increased depressive symptoms in adolescents. This may be due to interference with problem-focused coping strategies and emotional regulation, possibly caused by displacement effects and creation of echo chambers [
10].
4.1.2. Filter Bubbles and Echo Chambers
AI algorithms can also contribute to the
digital pandemic by creating “filter bubbles” and “echo chambers”. While personalized internet experiences are often viewed as a positive development, AI can reinforce existing beliefs and restrict exposure to differing viewpoints. Filter bubbles arise when algorithms use user data to deliver tailored content, unintentionally isolating individuals from diverse perspectives without their awareness [
11,
12]. Filter bubbles create several problems: (1) people become isolated in their personalized information bubble, (2) since these bubbles are hidden, users are often unaware that their data is being collected and may mistakenly think the information they see is unbiased, and (3) users do not actively choose their bubbles; instead, they are passively placed in them by algorithms [
12]. For example, passive use of social media (such as scrolling and browsing) may draw individuals into filter bubbles and has been associated with heightened anxiety and depression. On the other hand, active use (such as posting, chatting, and otherwise engaging) is linked to more positive outcomes [
13].
Echo chambers occur when individuals are repeatedly exposed to the same types of information that align with their existing beliefs [
14], without being exposed to opposing viewpoints [
15]. This can reinforce personal attitudes and lead to group polarization [
16]. Research shows that personalized content can limit exposure to differing perspectives. For example, people tend to select news articles and websites that align with their existing views or political affiliations [
17,
18]. Unlike filter bubbles, where content is passively personalized by algorithms, echo chambers are actively created by individuals choosing content that matches their views [
11]. The internet, especially social media, provides an ideal environment for echo chambers by connecting people worldwide. As a result, users remain vulnerable to the effects of both filter bubbles and echo chambers.
4.1.3. Amplification of Negativity
AI algorithms may also contribute to the
digital pandemic via the amplification of negativity. In the virtual world, harmful content often appears more prominently than in the real world because there are no standardized regulations for censorship or removal. Social platforms such as TikTok have been shown to negatively influence the body satisfaction of adolescents [
19]. A study comparing U.S. undergraduate students in 2015 and 2022 found that those in 2022 reported higher levels of body image dissatisfaction, increased engagement in vomiting and laxative use, and more time spent on image-based social media platforms like Snapchat, TikTok, and YouTube [
20]. The study emphasized that body image issues were linked more to the type of content consumed than to the total time spent online [
20]. In addition, although body positivity content promoting self-love has become popular on TikTok, it may unintentionally reinforce unrealistic beauty standards [
21]. In contrast, the concept of body neutrality, which emphasizes body function rather than appearance, offers a more inclusive and supportive approach to promote body acceptance, fostering a safer online space for adolescents [
22].
The World Economic Forum also highlights the threat of digital misinformation, noting that social media has transformed information creation and consumption from a journalist-mediated process into a decentralized, user-driven one [
23]. This shift promotes cognitive biases, such as confirmation bias, and contributes to the formation of echo chambers, where harmful content can spread unchecked, further polarizing opinions and intensifying negative effects [
24,
25].
4.1.4. Behavioral Conditioning
Behavioral conditioning is another mechanism utilized by AI algorithms in promoting the
digital pandemic. Intermittent reinforcement techniques, such as notifications and likes on social media, can foster addictive behaviors. The Incentive Sensitization Theory (IST) suggests that cues related to addiction significantly influence such addictive behaviors [
26], although this has been insufficiently studied in the context of social media. A study of 1436 participants found that “wanting” (the desire for social media engagement) was linked to more frequent use and problematic behavior, while “liking” showed weaker and inconsistent correlations [
26]. Social-communication features on platforms like Facebook were found to contribute most to addiction, with compulsive use driving behavior more than the experience of positive emotions [
26]. Social media users often seek validation through likes and comments [
27], activating the dopaminergic system and reinforcing addictive behaviors [
28]. Research has also found that problematic social media use is associated with younger age, mental distress, and behavioral addictions such as gaming and gambling in the Swedish general population (
n = 2118) [
29]. Longitudinal studies suggest that adolescents with problematic social media use are more likely to experience attention and impulsivity issues [
30,
31], depressive feelings [
32], and lower academic achievement and life satisfaction [
33]. They are also more sensitive to peer influence and social feedback [
34].
4.2. Malicious Users
In addition to AI algorithms, malicious users contribute to the
digital pandemic through cyberbullying, promoting harmful ideologies, and engaging in abuse or exploitation. Social media platforms like Facebook and Instagram can be manipulated by these users to carry out harmful activities. Unlike face-to-face interactions, where private information is shared only after trust is established, social media often leads users to disclose personal details prematurely, even to untrustworthy individuals. Research shows that privacy concerns do not significantly predict whether an individual joins a social media platform, and even those who are strongly concerned about privacy often share substantial personal information [
35]. This low regard for privacy makes users vulnerable to exploitation by malicious users on social media platforms.
4.2.1. Cyberbullying
Anonymous online spaces often foster hostility, harassment, and intimidation, which can negatively impact the mental health of adolescents, especially when exploited by malicious users. Many young people view social media as a threat, associating it with mood and anxiety disorders, cyberbullying, exposure to harmful content, and even addiction [
36,
37]. Adolescents frequently feel pressured to remain constantly connected, leading to issues like appearance comparisons, idealized body standards, and a constant need for self-expression and validation [
36,
37]. Research has shown that social media significantly affects the mental well-being of children and young people, particularly in terms of self-esteem and body image [
38]. For example, the “Facebook Depression” phenomenon links social media use to lower self-esteem and body concerns.
Cyberbullying affects a substantial proportion of youths, with prevalence estimates ranging from 6.5% to 35.4% [
39], and is often linked to previous experiences of traditional bullying, with girls being more affected than boys [
40]. Cyberbullying risk factors include spending more than three hours online, using webcams, sharing personal information, and engaging in online harassment [
39]. Both victims and perpetrators of cyberbullying report higher levels of emotional and social difficulties, as well as greater problematic social media use [
40]. Cyberbullying has also been associated with depression and substance use [
39]. On the other hand, an Italian study found that social support can act as a protective factor, helping to mitigate the negative effects of problematic social media use among adolescents [
40].
4.2.2. Promotion of Harmful Ideologies
Malicious users on social media platforms also contribute to the spread of harmful ideologies, including extremist political, religious, and health-related beliefs, which may further promote the
digital pandemic. The dissemination of conspiracy theories, such as those surrounding COVID-19, poses significant public health risks, as these theories influence harmful behaviors. Research has shown that factors including young age, female gender, and using social media as an emotional outlet are associated with a higher likelihood of accepting conspiracy theories [
41]. Additionally, exposure to hate content, whether online or in traditional media, fosters negative attitudes and stereotypes towards targeted groups, hindering efforts to promote positive perceptions and inclusivity [
42]. This exposure can reduce intergroup trust and has been strongly correlated with increased victimization, online hate speech, and even offline violence [
42]. Hate speech on social media platforms also contributes to a hostile environment, which negatively impacts psychological well-being by contributing to symptoms of depression, lower life satisfaction, and heightened social fear [
42]. Furthermore, the normalization of prejudice through biased commentary and hate speech creates an environment where individuals become desensitized to harmful rhetoric. This desensitization helps explain the widespread nature of such content and the resistance to regulating it [
42].
4.2.3. Abuse and Exploitation
Social media, while offering a sense of escape for some adolescents, can also serve as a platform for exploitation and abuse, particularly among vulnerable youth. Malicious users, including online predators, may create psychological profiles to target victims, contributing to the
digital pandemic. One concerning trend is image-based sexual abuse (IBSA), which has become more prevalent with rising adolescent engagement on social media platforms [
43]. A study in Norway found that over a 12-month period, 2.9% of adolescents experienced IBSA, 4.3% experienced physical sexual victimization (PSV), and 1.7% encountered both [
43]. Victims of PSV often exhibit known risk factors such as female gender, exposure to violence, substance-using peer groups, early sexual activity, minority sexual identities, and vulnerability to commercial sexual exploitation [
43]. On the other hand, victims of IBSA do not follow the same risk profile; the abuse affects individuals across genders and socioeconomic backgrounds, indicating a broader vulnerability [
43]. Additionally, sexual grooming has emerged as an escalating concern, driven by the increased digital use of adolescents. Research suggests that the perceived severity of grooming is influenced more by the dynamics of the relationship than by the attractiveness of the predator, indicating that psychological manipulation may be easily applied by malicious users on social media platforms [
44]. As adolescents continue to interact in online spaces, the risk of grooming and other digital forms of sexual exploitation is expected to rise, underscoring the urgent need for legal and educational interventions.
4.3. Users with Undetected Mental Health Issues
In addition to harmful AI algorithms and malicious users, individuals with undetected mental health issues can also contribute to the
digital pandemic. For example, by sharing self-injurious thoughts or images, users with undetected mental health issues may unintentionally promote harmful behaviors among vulnerable users, especially children and adolescents. Social media, which plays a key role in shaping cognitive and emotional development [
45], can amplify these effects when such content spreads. Adolescents might use these platforms to vent or seek relief, but the display of distressing images or thoughts could influence others in similar emotional states, potentially leading to a chain reaction of harmful behaviors.
Research indicates a significant link between cybervictimization and self-injurious thoughts and behaviors (SITBs), with particularly strong correlations with suicidal ideation in adolescents [
46]. Internet usage tied to self-harm is often associated with factors like internet addiction, prolonged screen time, and access to self-harm-promoting content [
47]. Additionally, extended screen time can impair emotional regulation and coping abilities [
10], and is linked to increased depressive symptoms—particularly among girls. Preventive interventions that teach emotional regulation and healthier coping strategies could help mitigate these mental health risks among adolescents [
10].
5. Counteracting Strategies
The strategies to counteract the digital pandemic can be classified into three levels: personal, school and community, and national and global. First, at the personal level, strategies to foster resilience and enhance self-control should be implemented among adolescents. Second, schools and communities should play an active role in raising awareness about how to stay safe online, including organizing campaigns against bullying and cyberbullying. Third, at the national and global levels, policymakers, AI experts, and mental health professionals should collaborate to develop guidelines for the use of AI in mental health. These efforts should aim to promote trustworthy AI that aligns with societal values while ensuring data privacy and transparency.
5.1. Personal Level: Foster Resilience and Improve Self-Control in Adolescents
One effective strategy to counteract the negative effects of the
digital pandemic on Generation Z is to cultivate resilience and strengthen self-control in adolescents. Psychological resilience refers to the capacity of an individual to recover from adversity, adapt to challenges, and achieve social and academic success despite facing significant stress [
48]. Rather than being an inborn trait limited to a few, resilience is a skill that can be developed through supportive environments and by addressing high-risk behaviors commonly encountered during adolescence [
49]. Resilience can be strengthened via mindfulness practices and stress management skills [
50]. Moreover, a proactive personality has also been shown to promote resilience. One study found that a proactive personality is negatively associated with short-form video (SFV) addiction and that both resilience and self-control partially mediate this relationship, suggesting a chain effect [
51]. Consequently, many interventions targeting internet addiction have emphasized building resilience and improving self-regulation. Equally important is fostering media literacy by equipping young people with skills to critically evaluate online content, recognize manipulative digital strategies, and manage screen time effectively. Mental health promotion programs, including mindfulness practices, coping and help-seeking skills training, and stress management, should be implemented in school curricula alongside media literacy courses. Parenting skills programs should also incorporate stress management and media literacy components to help guide parental supervision in the digital age.
5.2. School and Community Level: Anti-(Cyber)bullying with the Whole School Involvement
The second level of strategy against the
digital pandemic involves schools and communities. For example, whole-school anti-(cyber)bullying interventions have been shown to be more effective at reducing bullying than approaches focused solely on classroom instruction or social skills training [
52]. Anti-(cyber)bullying strategies that integrate school-wide policies, teacher training, classroom activities, and individualized counseling have shown greater success than curriculum-based efforts alone [
53]. This comprehensive model addresses multiple levels of school life, creating a more cohesive and consistent environment for tackling (cyber)bullying.
Organizations such as UNESCO and the World Anti-Bullying Forum have championed this whole-education approach, recognizing the importance of integrating education with broader societal, technological, and community influences [
54]. One example is the Cyberbullying Effects Prevention (CREEP) project, which supports early detection of cyberbullying through digital tools and offers coaching to all affected parties—victims, perpetrators, and bystanders [
54]. The initiative has gained positive reception and is currently being tested in multiple European countries. Schools and communities should implement programs such as CREEP to help decrease incidents of cyberbullying.
5.3. National and Global Level: Risk Mitigation Strategies
To promote ethical AI use in adolescent mental health, the third level involves collaboration among national and global agencies to develop comprehensive guidelines and enforcement mechanisms. Mental health professionals and technology experts should work together to remove harmful content from training data, advocate for stricter social media regulations, and ensure transparency and accountability in AI systems. Customization of AI responses to suit individual needs is critical to avoid reinforcing harmful behaviors or fostering dependency. Ethical standards must prioritize robust filtering, clear protocols for crisis management, and responsible AI interactions, particularly in sensitive areas like self-harm and body image (
Figure 2).
At the international level, current challenges include inconsistent regulation, lack of global coordination, and lack of data transparency. Initiatives from organizations such as the United Nations (UN) and the Organization for Economic Cooperation and Development (OECD) highlight the need for inclusive, transparent, and accountable AI governance. Key recommendations include global cooperation on AI governance, data transparency, and risk assessment through scientific panels [
55,
56]. The OECD AI Principles and national AI strategies from countries like the UK, France, and Finland emphasize ethical oversight, public consultation, and alignment with societal values [
56,
57,
58,
59]. Moreover, harmonization of regulatory sandboxes (experimental legal frameworks) across neighboring countries regarding the use of AI in mental health, or medicine in general, should be addressed when developing such guidelines. This could promote interoperability and data sharing and enhance both innovation and safety. Therefore, we call for international legal frameworks, cross-border ethical oversight, and the development of explainable AI to enhance public trust and accountability.
6. A Call for Collaborative Action
The digital pandemic is not only a significant challenge but also an opportunity for transformative change. It presents a chance to shape a future where technology acts as a force for good, enhancing rather than compromising the mental health of future generations. Here, we propose that resilience-building programs, including mindfulness, stress management, and coping skills development, along with media literacy, should be implemented in school curricula and parenting programs to boost immunity against the digital pandemic at the individual level. We also propose that anti-cyberbullying programs should be integrated with school-wide policies, teacher training, classroom activities, and individualized counseling. Lastly, we propose that collaboration is needed at both national and global levels to provide guidelines for the use of AI in mental health, including harmonization of regulatory sandboxes to promote interoperability and data sharing and to enhance both innovation and safety. Moreover, future discussions should also address the technical regulation of AI at the level of standardization and certification, as well as the specific legal consequences of the illegal use of AI. The success of this vision depends on collaboration among mental health professionals, computer scientists, educators, parents, and policymakers. Together, we must work to create a digital ecosystem that fosters well-being, nurtures creativity, and encourages positive social connections.
Author Contributions
Conceptualization, J.P.-C.C. and K.-P.S.; Writing-Original Draft Preparation, J.P.-C.C.; Writing-Review & Editing, J.P.-C.C., S.-W.C., S.M.-J.C. and K.-P.S., Supervision, K.-P.S. All authors have accepted responsibility for the entire content of this manuscript and consented to its submission to the journal, reviewed all the results and approved the final version of the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Acknowledgments
The authors of this work were supported by the following grants: MOST 111-2321-B-006-008, NSTC 111-2314-B-039-041-MY3, NSTC 113-2923-B-039-001-MY3, NSTC 113-2314-B-039-048, NSTC 113-2314-B-039-046, 113-2320-B-281-001- from the National Science and Technology Council (NSTC), Taiwan; ANHRF 112-24, 112-47, 113-24, 113-38, 113-40, 113-48, 113-51, 113-10 from An-Nan Hospital, China Medical University (CMU), Tainan, Taiwan; CMRC-CMA-2 from Higher Education Sprout Project by the Ministry of Education (MOE), Taiwan; CMU112-AWARD-01 from the CMU, Taichung, Taiwan; and DMR-112-097, 112-086, 112-109, 112-232, 113-210, 113-003, 114-050, 114-187, 114-143 from the CMU Hospital, Taichung, Taiwan.
Conflicts of Interest
Author Steve Ming-Jang Chang declares no conflict of interest regarding Trend Micro Inc. The other authors declare that they have no financial or other relationships that might lead to a conflict of interest.
References
- Turner, A. Generation Z: Technology and Social Interest. J. Individ. Psychol. 2015, 71, 103–113. [Google Scholar] [CrossRef]
- Backlinko Team. Social Media Usage & Growth Statistics. Available online: https://backlinko.com/social-media-users (accessed on 7 March 2025).
- Adelantado-Renau, M.; Moliner-Urdiales, D.; Cavero-Redondo, I.; Beltran-Valls, M.R.; Martinez-Vizcaino, V.; Alvarez-Bueno, C. Association Between Screen Media Use and Academic Performance Among Children and Adolescents: A Systematic Review and Meta-analysis. JAMA Pediatr. 2019, 173, 1058–1067. [Google Scholar] [CrossRef] [PubMed]
- Twenge, J.M.; Cooper, A.B.; Joiner, T.E.; Duffy, M.E.; Binau, S.G. Age, period, and cohort trends in mood disorder indicators and suicide-related outcomes in a nationally representative dataset, 2005–2017. J. Abnorm. Psychol. 2019, 128, 185–199. [Google Scholar] [CrossRef] [PubMed]
- Twenge, J.M.; Hisler, G.C.; Krizan, Z. Associations between screen time and sleep duration are primarily driven by portable electronic devices: Evidence from a population-based study of U.S. children ages 0–17. Sleep Med. 2019, 56, 211–218. [Google Scholar] [CrossRef]
- Haidt, J. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness; Penguin Press: London, UK, 2024. [Google Scholar]
- Mateu, A.; Pascual-Sanchez, A.; Martinez-Herves, M.; Hickey, N.; Nicholls, D.; Kramer, T. Cyberbullying and post-traumatic stress symptoms in UK adolescents. Arch. Dis. Child 2020, 105, 951–956. [Google Scholar] [CrossRef] [PubMed]
- Su, K.P. Harmonizing the inner orchestra: The impact of urbanization and evolution of stress, inflammation, diet, and lifestyles in depression. Curr. Opin. Psychiatry 2024, 38, 209–216. [Google Scholar] [CrossRef]
- Kelly, C.A.; Sharot, T. Web-browsing patterns reflect and shape mood and mental health. Nat. Hum. Behav. 2025, 9, 133–146. [Google Scholar] [CrossRef]
- Hokby, S.; Westerlund, J.; Alvarsson, J.; Carli, V.; Hadlaczky, G. Longitudinal Effects of Screen Time on Depressive Symptoms among Swedish Adolescents: The Moderating and Mediating Role of Coping Engagement Behavior. Int. J. Environ. Res. Public Health 2023, 20, 3771. [Google Scholar] [CrossRef]
- Zuiderveen Borgesius, F.J.; Trilling, D.; Moller, J.; Bodo, B.; de Vreese, C.; Helberger, N. Should we worry about filter bubbles. Internet Policy Rev. 2016, 5. [Google Scholar] [CrossRef]
- Pariser, E. What the Internet Is Hiding from You; Penguin Group: London, UK, 2011. [Google Scholar]
- Thorisdottir, I.E.; Sigurvinsdottir, R.; Asgeirsdottir, B.B.; Allegrante, J.P.; Sigfusdottir, I.D. Active and Passive Social Media Use and Symptoms of Anxiety and Depressed Mood Among Icelandic Adolescents. Cyberpsychol. Behav. Soc. Netw. 2019, 22, 535–542. [Google Scholar] [CrossRef]
- Jamieson, K.H.; Cappella, J.N. Echo Chamber: Rush Limbaugh and the Conservative Media Establishment; Oxford University Press: New York, NY, USA, 2008. [Google Scholar]
- Beam, M.A. Automating the News: How Personalized News Recommender System Design Choices Impact News Reception. Commun. Res. 2014, 41, 1019–1041. [Google Scholar] [CrossRef]
- Stroud, N.J. Polarization and Partisan Selective Exposure. J. Commun. 2010, 60, 556–576. [Google Scholar] [CrossRef]
- Garrett, R.K. Echo chambers online?: Politically motivated selective exposure among internet news users. J. Comput.-Mediat. Commun. 2009, 14, 265–285. [Google Scholar] [CrossRef]
- Iyengar, S.; Hahn, K.S. Red media, blue media: Evidence of ideological selectivity in media use. J. Commun. 2009, 59, 19–39. [Google Scholar] [CrossRef]
- Ariana, H.; Almuhtadi, I.; Natania, N.J.; Handayani, P.W.; Bressan, S.; Larasati, P.D. Influence of TikTok on Body Satisfaction Among Generation Z in Indonesia: Mixed Methods Approach. JMIR Hum. Factors 2024, 11, e58371. [Google Scholar] [CrossRef]
- Sanzari, C.M.; Gorrell, S.; Anderson, L.M.; Reilly, E.E.; Niemiec, M.A.; Orloff, N.C.; Anderson, D.A.; Hormes, J.M. The impact of social media use on body image and disordered eating behaviors: Content matters more than duration of exposure. Eat Behav. 2023, 49, 101722. [Google Scholar] [CrossRef]
- Cohen, R.; Irwin, L.; Newton-John, T.; Slater, A. #bodypositivity: A content analysis of body positive accounts on Instagram. Body Image 2019, 29, 47–57. [Google Scholar] [CrossRef]
- Hallward, L.; Feng, O.; Duncan, L.R. An exploration and comparison of #BodyPositivity and #BodyNeutrality content on TikTok. Eat Behav. 2023, 50, 101760. [Google Scholar] [CrossRef]
- Quattrociocchi, W. How does misinformation spread online. In Proceedings of the World Economic Forum, Davos-Klosters, Switzerland, 20–23 January 2016. [Google Scholar]
- Quattrociocchi, W.; Caldarelli, G.; Scala, A. Opinion dynamics on interacting networks: Media competition and social influence. Sci. Rep. 2014, 4, 4938. [Google Scholar] [CrossRef]
- Del Vicario, M.; Vivaldo, G.; Bessi, A.; Zollo, F.; Scala, A.; Caldarelli, G.; Quattrociocchi, W. Echo Chambers: Emotional Contagion and Group Polarization on Facebook. Sci. Rep. 2016, 6, 37825. [Google Scholar] [CrossRef]
- Lakatos, D.; Kovacs, B.; Czigler, I.; Demetrovics, Z.; File, D. Wanting and liking of Facebook functions and their correlation to problematic use. Sci. Rep. 2025, 15, 3643.
- Pedrouzo, S.B.; Krynski, L. Hyperconnected: Children and adolescents on social media. The TikTok phenomenon. Arch. Argent. Pediatr. 2023, 121, e202202674.
- Montag, C.; Yang, H.; Elhai, J.D. On the Psychology of TikTok Use: A First Glimpse From Empirical Findings. Front. Public Health 2021, 9, 641673.
- Henzel, V.; Hakansson, A. Hooked on virtual social life. Problematic social media use and associations with mental distress and addictive disorders. PLoS ONE 2021, 16, e0248406.
- Boer, M.; Stevens, G.; Finkenauer, C.; van den Eijnden, R. Attention Deficit Hyperactivity Disorder-Symptoms, Social Media Use Intensity, and Social Media Use Problems in Adolescents: Investigating Directionality. Child Dev. 2020, 91, e853–e865.
- Thorell, L.B.; Buren, J.; Strom Wiman, J.; Sandberg, D.; Nutley, S.B. Longitudinal associations between digital media use and ADHD symptoms in children and adolescents: A systematic literature review. Eur. Child Adolesc. Psychiatry 2024, 33, 2503–2526.
- Boer, M.; Stevens, G.W.J.M.; Finkenauer, C.; de Looze, M.E.; van den Eijnden, R.J.J.M. Social media use intensity, social media use problems and mental health among adolescents: Investigating directionality and mediating processes. Comput. Hum. Behav. 2021, 116, 106645.
- van den Eijnden, R.; Koning, I.; Doornwaard, S.; van Gurp, F.; Ter Bogt, T. The impact of heavy and disordered use of games and social media on adolescents’ psychological, social, and school functioning. J. Behav. Addict. 2018, 7, 697–706.
- Armstrong-Carter, E.; Garrett, S.L.; Nick, E.A.; Prinstein, M.J.; Telzer, E.H. Momentary links between adolescents’ social media use and social experiences and motivations: Individual differences by peer susceptibility. Dev. Psychol. 2023, 59, 707–719.
- Acquisti, A.; Gross, R. Imagined communities: Awareness, information sharing, and privacy on the Facebook. In Privacy Enhancing Technologies; Danezis, G., Golle, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 36–58.
- O’Reilly, M.; Dogra, N.; Whiteman, N.; Hughes, J.; Eruyar, S.; Reilly, P. Is social media bad for mental health and wellbeing? Exploring the perspectives of adolescents. Clin. Child Psychol. Psychiatry 2018, 23, 601–613.
- Popat, A.; Tarrant, C. Exploring adolescents’ perspectives on social media and mental health and well-being—A qualitative literature review. Clin. Child Psychol. Psychiatry 2023, 28, 323–337.
- Richards, D.; Caldwell, P.H.; Go, H. Impact of social media on the health of children and young people. J. Paediatr. Child Health 2015, 51, 1152–1157.
- Bottino, S.M.; Bottino, C.M.; Regina, C.G.; Correia, A.V.; Ribeiro, W.S. Cyberbullying and adolescent mental health: Systematic review. Cad. Saude Publica 2015, 31, 463–475.
- Borraccino, A.; Marengo, N.; Dalmasso, P.; Marino, C.; Ciardullo, S.; Nardone, P.; Lemma, P.; The 2018 HBSC-Italia Group. Problematic Social Media Use and Cyber Aggression in Italian Adolescents: The Remarkable Role of Social Support. Int. J. Environ. Res. Public Health 2022, 19, 9763.
- Tsamakis, K.; Tsiptsios, D.; Stubbs, B.; Ma, R.; Romano, E.; Mueller, C.; Ahmad, A.; Triantafyllis, A.S.; Tsitsas, G.; Dragioti, E. Summarising data and factors associated with COVID-19 related conspiracy theories in the first year of the pandemic: A systematic review and narrative synthesis. BMC Psychol. 2022, 10, 244.
- Madriaza, P.; Hassan, G.; Brouillette-Alarie, S.; Mounchingam, A.N.; Durocher-Corfa, L.; Borokhovski, E.; Pickup, D.; Paille, S. Exposure to hate in online and traditional media: A systematic review and meta-analysis of the impact of this exposure on individuals and communities. Campbell Syst. Rev. 2025, 21, e70018.
- Pedersen, W.; Bakken, A.; Stefansen, K.; von Soest, T. Sexual Victimization in the Digital Age: A Population-Based Study of Physical and Image-Based Sexual Abuse Among Adolescents. Arch. Sex. Behav. 2023, 52, 399–410.
- Shappley, A.G.; Walker, R.V. The Role of Perpetrator Attractiveness and Relationship Dynamics on Perceptions of Adolescent Sexual Grooming. J. Child Sex. Abus. 2024, 33, 1048–1065.
- Pennycook, G.; Rand, D.G. The Psychology of Fake News. Trends Cogn. Sci. 2021, 25, 388–402.
- Nesi, J.; Burke, T.A.; Bettis, A.H.; Kudinova, A.Y.; Thompson, E.C.; MacPherson, H.A.; Fox, K.A.; Lawrence, H.R.; Thomas, S.A.; Wolff, J.C.; et al. Social media use and self-injurious thoughts and behaviors: A systematic review and meta-analysis. Clin. Psychol. Rev. 2021, 87, 102038.
- Marchant, A.; Hawton, K.; Stewart, A.; Montgomery, P.; Singaravelu, V.; Lloyd, K.; Purdy, N.; Daine, K.; John, A. A systematic review of the relationship between internet use, self-harm and suicidal behaviour in young people: The good, the bad and the unknown. PLoS ONE 2017, 12, e0181722.
- Henderson, N.; Milstein, M.M. Resiliency in Schools: Making It Happen for Students and Educators; Corwin: Thousand Oaks, CA, USA, 2003.
- Herrman, H.; Stewart, D.E.; Diaz-Granados, N.; Berger, E.L.; Jackson, B.; Yuen, T. What is resilience? Can. J. Psychiatry 2011, 56, 258–265.
- Fenwick-Smith, A.; Dahlberg, E.E.; Thompson, S.C. Systematic review of resilience-enhancing, universal, primary school-based mental health promotion programs. BMC Psychol. 2018, 6, 30.
- Wu, S.; Shafait, Z.; Bao, K. The relationship between proactive personality and college students’ short-form video addiction: A chain mediation model of resilience and self-control. PLoS ONE 2024, 19, e0312597.
- Cantone, E.; Piras, A.P.; Vellante, M.; Preti, A.; Danielsdottir, S.; D’Aloja, E.; Lesinskiene, S.; Angermeyer, M.C.; Carta, M.G.; Bhugra, D. Interventions on bullying and cyberbullying in schools: A systematic review. Clin. Pract. Epidemiol. Ment. Health 2015, 11, 58–76.
- Shackleton, N.; Jamal, F.; Viner, R.; Dickson, K.; Hinds, K.; Patton, G.; Bonell, C. Systematic review of reviews of observational studies of school-level effects on sexual health, violence and substance use. Health Place 2016, 39, 168–176.
- Gabrielli, S.; Rizzi, S.; Carbone, S.; Piras, E.M. School Interventions for Bullying-Cyberbullying Prevention in Adolescents: Insights from the UPRIGHT and CREEP Projects. Int. J. Environ. Res. Public Health 2021, 18, 1697.
- UN AI Advisory Body. Governing AI for Humanity; United Nations: New York, NY, USA, 2024; pp. 1–100.
- OECD. OECD AI Principles Overview. Available online: https://www.oecd.org/en/topics/ai-principles.html (accessed on 10 May 2025).
- U.K. Government, Guidance: Understanding Artificial Intelligence Ethics and Safety. Available online: https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety (accessed on 10 May 2025).
- Villani, C. France AI Strategy Report. Available online: https://ai-watch.ec.europa.eu/countries/france/france-ai-strategy-report_en (accessed on 10 May 2025).
- Madiega, T. EU Guidelines on Ethics in Artificial Intelligence: Context and Implementation; European Parliament: Strasbourg, France, 2019.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).