Review
Peer-Review Record

Navigating the Digital Maze: A Review of AI Bias, Social Media, and Mental Health in Generation Z

by Jane Pei-Chen Chang 1,2,3, Szu-Wei Cheng 1, Steve Ming-Jang Chang 4 and Kuan-Pin Su 1,3,5,*
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 21 April 2025 / Revised: 27 May 2025 / Accepted: 4 June 2025 / Published: 6 June 2025
(This article belongs to the Special Issue AI Bias in the Media and Beyond)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Authors!
Within the framework of the presented research, it is recommended to refine and expand the following provisions:
1. Identify the research methodology in a separate block. Currently this section is missing, and the goals and objectives of the study, its boundaries, and the expected results are unclear.
1.1. It is recommended to provide a more detailed conceptual distinction between GenAI, AI, and LLMs. (I strongly recommend this, since different rules already exist for regulating these technologies, including at these levels.)
1.2. Justify the selected pool of literature (did the authors use the most cited works, or was the choice made on the basis of other indicators?).
1.3. Clarify the geographical scope of the selected empirical data, and provide a statistical overview within each section (graphs, etc.).
2.1. It is necessary to analyze the documents of the UN (especially the report on the activities of the UN High-Level Advisory Forum on Artificial Intelligence), the OECD, and national governments (within whose frameworks surveys are conducted, including in the fourth editions of ethical recommendations) on the features of these standards in relation to AI, most of the long-declared problems, and their solutions.
2.2. Issues of technical regulation at the level of standardization and certification are not considered. (A recommendation that is not mandatory.)
2.3. With regard to the medical use of AI, it is necessary to change the rules for regulating experimental legal regimes (regulatory sandboxes) for the use of AI in neighboring countries.
2.4. Specific legal consequences of the illegal use of AI (with examples corresponding to the sections) and the definition of danger are not described.
3. Concretize the conclusions of the proposals and actions.

Author Response

Comments from Reviewer 1:
1. Identify the research methodology in a separate block. Currently, this section is missing, the goals and objectives of the study, its boundaries and the predicted result are unclear.

 

A: Thank you for the comment. We have stated the goals and objectives of the review in the Introduction to make clear to the audience that this is a review rather than an original study. Please see page 1 line 19 in the Abstract and page 2 lines 53-54 in the Introduction. We have also added a section on the methodology on page 3 lines 59-66 in Section 2.

 

“This review delves into the intersections of AI bias, social media engagement….” (Abstract on page 1)

 

“This review aims to cover the potential mechanisms that may contribute to the digital pandemic and also to propose potential strategies that will help counteract it.” (Introduction on page 2)

 

“PubMed was used to search for studies examining the potential mechanisms that may be driving the newly introduced concept, digital pandemic, using the keywords “screen media use” in combination with “adolescents,” “artificial intelligence algorithms,” “behavioral conditioning,” “cyberbullying,” “echo chambers,” “filter bubbles,” “internalizing/externalizing,” “internet addiction,” “mental health,” “online predators,” “screen time,” “suicide,” “suicide ideation,” and “self-harm.” All relevant articles, including experimental research, cross-sectional and longitudinal epidemiological studies, and systematic reviews, were reviewed and synthesized in this integrative review…” (Section 2 on Page 3)


1.1. It is recommended to provide a more detailed conceptual distinction between GenAI, AI, LLM. (I strongly recommend it, since there are already different rules for regulating these technologies, including at these levels)

 

A: Thank you for the comment. We have provided a more detailed conceptual distinction between GenAI, AI and LLM in the Introduction. Please see page 2 lines 44-46.

 

 “…the role of Generative Artificial Intelligence (AI)—including large language models (LLMs)—in shaping the digital ecosystem. While LLMs focus on generating human-like text [8], generative AI more broadly creates data across various media formats such as images, audio, video, and code...” (Introduction on page 2)


1.2. Justify the selected pool of literature (the authors used the most cited works or the choice was made on the basis of other indicators).

 

A: We have cited the works in the review based on our search in PubMed.


1.3. Clarify the geographical studies of the various selected empirical data, as well as perform a statistical review of each section directly (graphs, etc.)

 

A: Thank you for the comment. We have clarified the geographic context of the studies included in our paper. Please see page 1 lines 33-34 in the Introduction, page 4 lines 110-112 in Section 4.1.1, page 5 line 147 in Section 4.1.3, and page 7 line 236 in Section 4.2.3.

 

“…now totaling approximately 5.24 billion—actively engage with social media, with adolescents spending an average of 141 minutes daily on these platforms worldwide…” (Introduction on Page 1)

 

“A study involving 1,145 participants mainly from the U.S. and U.K. examined the relationship between information-seeking behavior and mental health…” (in Section 4.1.1 on Page 4)

 

“A study comparing U.S. undergraduate students …” (Section 4.1.3 on Page 5)

 

“A study in Norway found that over a 12-month period, 2.9%…” (Section 4.2.3 on Page 7)


2.1. It is necessary to analyze the documents of the UN (especially the report on the activities of the UN High-Level Advisory Forum on Artificial Intelligence), the OECD, and national governments (within whose frameworks surveys are conducted, including in the fourth editions of ethical recommendations) on the features of these standards in relation to AI, most of the long-declared problems, and their solutions.

 

A: Thank you for the valuable comment. We have added information on the documents of the UN High-Level Advisory Forum on Artificial Intelligence, the OECD, and national governments on page 9 lines 326-333 in Section 5.3.

 

“…current challenges include inconsistent regulation, lack of global coordination, and lack of data transparency. Initiatives from organizations like the United Nations (UN) and the Organization for Economic Cooperation and Development (OECD) highlighted the need for inclusive, transparent, and accountable AI governance. Key recommendations include global cooperation on AI governance, data transparency, and risk assessment through scientific panels [55, 56]. The OECD AI Principles and national AI strategies from countries like the UK, France, and Finland emphasize ethical oversight, public consultation, and alignment with societal values…” (Section 5.3 on Page 9)


2.2. Issues of technical regulation at the level of standardization and certification are not considered. (recommendation that is not mandatory)

 

A: We believe this is a valuable comment that deserves more attention, but due to the limited scope of our review, we have addressed it in the last section by calling for future studies to focus specifically on these issues. Please see page 9 lines 353-354 in Section 6.

 

“…future discussions should also address issues of technical regulation of AI at the level of standardization and certification…” (Section 6 on Page 9)


2.3. With regard to the medical use of AI, it is necessary to change the rules for regulating experimental legal regimes (regulatory sandboxes) regarding the use of AI in neighboring countries.

 

A: Yes, we believe it is necessary to change the rules for regulating experimental legal regimes (regulatory sandboxes) regarding the use of AI in neighboring countries when it comes to the medical use of AI. Harmonizing the regulatory sandboxes across neighboring countries will help promote interoperability and data-sharing and enhance both innovation and safety. We have added this on page 9 lines 333-337 in Section 5.3.

 

“…harmonization of regulatory sandboxes (experimental legal frameworks) across neighboring countries regarding the use of AI in mental health or medicine in general should be addressed in the development of such guidelines to help promote interoperability and data-sharing and enhance both innovation and safety…” (Section 5.3 on Page 9)


2.4. Specific legal consequences (with examples in accordance with the sections) of the illegal use of AI and the definition of danger are not described.

 

A: We believe this is a valuable comment that deserves more attention, but due to the limited scope of our review, we have addressed it in the last section by calling for future studies to focus specifically on these issues on page 9 lines 353-355 in Section 6.

 

“…future discussions should also address … and specific legal consequences of the illegal use of AI…” (Section 6 on Page 9).


  3. Concretize the conclusions of the proposals and actions.

 

A: Thank you for your comment. We have concretized the conclusions of the proposals and actions on page 9 lines 345-355 in Section 6.

 

“…here we proposed that resilience-building programs including mindfulness, stress management, and coping skills development, along with media literacy, should be implemented in school curriculums and parenting programs to boost immunity against the digital pandemic at the individual level. We also proposed that anti-cyberbullying programs should be integrated with school-wide policies, teacher training, classroom activities, and individualized counseling. Lastly, we propose that collaboration is needed at both national and global levels to provide a guideline for the use of AI in mental health, including harmonization of regulatory sandboxes to promote interoperability and data-sharing and enhance both innovation and safety. Moreover, future discussions should also address issues of technical regulation of AI at the level of standardization and certification, and specific legal consequences of the illegal use of AI…” (Section 6 on Page 9).

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

This manuscript offers a timely and valuable review on the intersection of AI-driven content algorithms, social media behavior, and mental health challenges in Generation Z. The topic is highly relevant, and the literature is well selected and thoughtfully discussed.

To strengthen the paper, I suggest the following improvements:

  1. Revise the manuscript for grammar, sentence structure, and clarity. Some sections are repetitive or difficult to follow.

  2. Clarify and formalize key concepts early on — particularly the idea of the “digital pandemic.”

  3. Improve the structure of Section 3 to ensure smoother transitions and clearer sub-section goals.

  4. Consider reducing redundancy in cited studies and integrating more diverse global perspectives.

  5. A visual summary (e.g., a framework or conceptual model) would enhance the paper’s structure and impact.

With these refinements, the manuscript will be well-positioned to contribute meaningfully to the literature on AI, youth behavior, and digital mental health.

Comments for author File: Comments.pdf

Author Response

Comments from Reviewer 2

  1. General Comments

This review article explores the growing intersection between AI algorithms, social media engagement, and mental health outcomes in Generation Z. The paper is both timely and relevant, addressing urgent psychological and technological concerns in the digital era. The authors have compiled an extensive range of studies from psychology, digital media, and AI ethics to form a compelling narrative on the risks and countermeasures associated with algorithmic bias.

A: Thank you for your encouraging comments.

The manuscript is rich in content and generally well organized.

A: Thank you for your positive comment.

 

  2. Major Comments
  2.1. Language and Grammar
  • The manuscript would benefit from professional English editing. There are frequent grammar issues, awkward phrasings, and redundant phrases (e.g., “negative emotional responses…further triggering negative emotions”).

A: Thank you for your comment. We have performed professional English editing on the paper.

  • Some sections read like a summary of previous works rather than a focused academic review. Sentences are sometimes long, leading to reader fatigue.

A: Thank you for your comment, we have revised the paper and performed professional English editing on the paper.

 

  2.2. Structure and Framing
  • Section 3 is the core of the article but lacks clear substructure. The transitions between 3.1, 3.2, and 3.3 should be clearer.

A: Thank you for your comment. We have revised the section (now Section 4) and clarified the transitions between 4.1, 4.2, and 4.3. Please see page 4 lines 102-105 in Section 4.1, page 5 lines 120-121 in Section 4.1.2, line 143 in Section 4.1.3, lines 164-165 in Section 4.1.4, page 6 lines 182-185 in Section 4.2, lines 193-195 in Section 4.2.1, lines 214-216 in Section 4.2.2, page 7 lines 231-234 in Section 4.2.3, and lines 1079-1080. The revised sections are also bolded.

“…AI algorithms may contribute to the digital pandemic by triggering emotions in users and amplifying the negativity of harmful content on social media, and also reinforce behavioral conditioning that may further foster addictive behaviors via echo chambers and filter bubbles…” (Section 4.1 on Page 4)

“…AI algorithms can also contribute to the digital pandemic by creating "filter bubbles" and "echo chambers."…” (Section 4.1.2 on Page 5)

“…AI algorithms may also contribute to the digital pandemic via amplification of negativity...” (Section 4.1.3 on Page 5)

“…behavioral conditioning is another mechanism utilized by AI algorithms in promoting the digital pandemic…” (Section 4.1.4 on Page 5)

“…in addition to AI algorithms, malicious users contribute to the digital pandemic through cyberbullying, promoting harmful ideologies, and engaging in abuse or exploitation. Social media platforms like Facebook and Instagram can be manipulated by these users to carry out harmful activities…” (Section 4.2 on Page 6)

“…anonymous online spaces often foster hostility, harassment, and intimidation, which can negatively impact the mental health of adolescents, especially if utilized by malicious users…” (Section 4.2.1 on Page 6)

“…malicious users on social media platforms also contribute to the spread of harmful ideologies, including extremist political, religious, and health-related beliefs, that may further promote the digital pandemic…” (Section 4.2.2 on Page 6)

“…social media, while offering a sense of escape for some adolescents, can also serve as a platform for exploitation and abuse, particularly among vulnerable youth. Malicious users, including online predators, may create psychological profiles to target victims, contributing to the digital pandemic…” (Section 4.2.3 on Page 7)

“…in addition to harmful AI algorithms and malicious users, individuals with undetected mental health issues can also contribute to the digital pandemic…” (Section 4.2.3 on Page 7)

  • The term “digital pandemic” is used effectively, but its definition and conceptual novelty should be introduced earlier and more formally. A visual summary of this concept (e.g., flowchart or model) would enhance clarity.

A: Thank you for your comment. We have introduced the term “digital pandemic” earlier in the Introduction; please see page 1 line 40 and page 2 line 41 in the Introduction. We have also provided a visual summary of this concept to help enhance clarity.

“…the digital pandemic—marked by the pervasive use of social media and smartphones…”  (Introduction on Pages 1 and 2).

 

  2.3. Novelty and Conceptual Value
  • The paper relies heavily on previously published evidence but offers limited new synthesis or frameworks. The authors should consider formalizing a framework/model of risk and mitigation in social media–AI interactions and mental health.

A: Thank you for the valuable comment. We have formalized a framework of risk and mitigation in social media focused on AI interactions and mental health. Please see page 3 lines 68-75 in Section 2, lines 77-80 in Section 2, and page 9 lines 345-355 in Section 6.

 

“…Phone-centered lifestyles and social media platforms, driven by attention-optimizing AI algorithms, often exploit psychological tendencies by amplifying negative emotions such as hatred, fear, jealousy, and discrimination [6]. This dynamic fosters a toxic digital environment that can profoundly impact the physical, psychological, and social well-being of youth, whose mental and physical development is still ongoing. Additionally, cyberbullying has emerged as a widespread issue, enabled by the anonymity of online interactions, and has been linked to higher incidences of depression, anxiety, and even suicidal thoughts…” (Section 2 on Page 3)

 

“…excessive screen time and the addictive nature of social media further contribute to disruptions in sleep, reduced physical activity and declining overall mental and physical health [9]. This concerning trend mirrors the dynamics of a digital pandemic, where harmful mechanisms spread like a viral outbreak, affecting young minds and bodies…” (Section 2 on Page 3)

“…we proposed that resilience-building programs including mindfulness, stress management, and coping skills development, along with media literacy, should be implemented in school curriculums and parenting programs to boost immunity against the digital pandemic at the individual level. We also proposed that anti-cyberbullying programs should be integrated with school-wide policies, teacher training, classroom activities, and individualized counseling. Lastly, we propose that collaboration is needed at both national and global levels to provide a guideline for the use of AI in mental health, including harmonization of regulatory sandboxes to promote interoperability and data-sharing and enhance both innovation and safety. Moreover, future discussions should also address issues of technical regulation of AI at the level of standardization and certification, and specific legal consequences of the illegal use of AI…” (Section 6 on Page 9).

 

 

  • Section 4 is practical and appreciated, but the “Call for Collaborative Action” in Section 4.4 could be more targeted with actionable proposals.

A: Thank you for the comment. We have revised the section and provided actionable proposals. Please see page 9 lines 345-355 in Section 6.

 

“…here we proposed that resilience-building programs including mindfulness, stress management, and coping skills development, along with media literacy, should be implemented in school curriculums and parenting programs to boost immunity against the digital pandemic at the individual level. We also proposed that anti-cyberbullying programs should be integrated with school-wide policies, teacher training, classroom activities, and individualized counseling. Lastly, we propose that collaboration is needed at both national and global levels to provide a guideline for the use of AI in mental health, including harmonization of regulatory sandboxes to promote interoperability and data-sharing and enhance both innovation and safety. Moreover, future discussions should also address issues of technical regulation of AI at the level of standardization and certification, and specific legal consequences of the illegal use of AI…” (Section 6 on Page 9).

 

  2.4. References
  • The literature coverage is wide-ranging and current. However, the authors could include more global perspectives beyond Western and Taiwanese contexts to strengthen generalizability.

A: Thank you for the comment, we have included more references to strengthen generalizability. Please see page 9 lines 328-333 in Section 5.3 and page 13 lines 498-507.

“…United Nations (UN) and Organization for Economic Cooperation and Development (OECD) highlighted the need for inclusive, transparent, and accountable AI governance. Key recommendations include global cooperation on AI governance, data transparency, and risk assessment through scientific panels [55, 56]. The OECD AI Principles and national AI strategies from countries like the UK, France, and Finland emphasize ethical oversight, public consultation, and alignment with societal values [56-59]…” (Section 5.3 on Page 9)

 

Reference

  1. UN AI Advisory Body, Governing AI for Humanity. 2024. p. 1-100.
  2. OECD. OECD AI Principles overview. 2024 [cited 2025 May 10th]; Available from: https://www.oecd.org/en/topics/ai-principles.html.
  3. UK Government. Guidance: Understanding artificial intelligence ethics and safety. 2025 [cited 2025 May 10th]; Available from: https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety.
  4. Villani, C. France AI Strategy Report. 2021 [cited 2025 May 10th]; Available from: https://ai-watch.ec.europa.eu/countries/france/france-ai-strategy-report_en.
  5. Madiega, T., EU guidelines on ethics in artificial intelligence: Context and implementation. 2019, European Parliament. (References on Page 13)

 

  • Some studies (e.g., about TikTok, suicide content, and digital addiction) appear multiple times across different subtopics — consider consolidating for conciseness.

A: Thank you for your comment, we have tried to consolidate the studies. Please see page 5 lines 145-153 in Section 4.1.3, page 5 line 167 and page 6 lines 168-176 in Section 4.1.4, and page 7 line 261 and page 8 lines 262-263 in Section 4.3.

 

“…social platforms such as TikTok have been shown to negatively influence the body satisfaction of adolescents [19]. A study comparing U.S. undergraduate students in 2015 and 2022 found that those in 2022 reported higher levels of body image dissatisfaction, increased engagement in vomiting and laxative use, and more time spent on image-based social media platforms like Snapchat, TikTok, and YouTube [20]. The study emphasized that body image issues were linked more to the type of content consumed than to the total time spent online [20]. In addition, although body positivity content promoting self-love has gained popularity in TikTok…” (Section 4.1.3 on Page 5)

“…suggests that cues related to addiction significantly influence such behaviors [26], though this has been under-investigated in the context of social media. A study of 1,436 participants found that "wanting" (the desire for social media engagement) was linked to more frequent use and problematic behavior, while "liking" showed weaker and inconsistent correlations [26]. Social-communication features on platforms like Facebook were found to contribute most to addiction, with compulsive use driving behavior more than positive emotions [26]. Social media users often seek validation through likes and comments [27], activating the dopaminergic system and reinforcing addictive behaviors [28]. Research has also found that problematic social media use is associated with younger age, mental distress, and behavioral addictions like gaming and gambling…” (Section 4.1.4 on Pages 5-6)

“…suicidal ideation in adolescents [46]. Internet usage tied to self-harm is often associated with factors like internet addiction, prolonged screen time, and access to self-harm-promoting content…” (Section 4.3 on Pages 7-8)

  3. Minor Comments
  • Add a clear visual abstract or concept map.

A: Thank you for the comment, we have added a visual abstract. Please see visual abstract.

 

  • Improve the consistency of formatting in references (check numbering and punctuation).

 

A: We have checked the numbering and punctuation and improved consistency of formatting in the Reference section.

 

“Turner, A., Generation Z: Technology and Social Interest. J Individ Psychol, 2015. 71(2): p. 103-113.” (Reference on Page 10)

 

“Twenge, J.M., Hisler, G.C., Krizan, Z., Associations between screen time and sleep duration are primarily driven by portable electronic devices: evidence from a population-based study of U.S. children ages 0-17. Sleep Med, 2019. 56: p. 211-218.”  (Reference on Page 10)

 

“Kelly, C.A. and Sharot, T., Web-browsing patterns reflect and shape mood and mental health. Nat Hum Behav, 2025. 9(1): p. 133-146.” (Reference on Page 10)

“Iyengar, S. and Hahn, K.S., Red media, blue media: Evidence of ideological selectivity in media use. J Commun, 2009. 59(1): p. 19-39.” (Reference on Page 11)

 

“Shappley, A.G. and Walker, R.V., The Role of Perpetrator Attractiveness and Relationship Dynamics on Perceptions of Adolescent Sexual Grooming. J Child Sex Abus, 2024. 33(8): p. 1048-1065.”  (Reference on Page 12)

 

“Fenwick-Smith, A., Dahlberg, E.E., Thompson, S.C., Systematic review of resilience-enhancing, universal, primary school-based mental health promotion programs. BMC Psychol, 2018. 6(1): p. 30.” (Reference on Page 13)

 

  • Fix typos: “suers” → “users” (Line 38), “medal” → “media” (Line 179), “Priacy” → “Privacy” (Line 458).

 

A: Thank you for your comments, we have corrected the typos. Please see page 6 line 173 in Section 4.1.4 and page 12 line 252 in the References.

 

“…and problematic social media behavior…” (Page 6 in Section 4.1.4)

 

“…Facebook, in Privacy Enhancing Technologies…” (Page 12 in Reference)

 

  • Consider refining the title- possibly adding “A Review” at the end for clarity and indexing.

 

A: Many thanks for your comment, but we felt that adding “A Review” would be redundant, as the journal already places a “Review” label before the title. Please see Page 1 line 1.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Dear authors,

thank you for your work

Author Response

Dear Reviewer: 

  Thank you for your comments!

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for addressing the previous comments and submitting the revised manuscript.
I have reviewed the updated version, and I find that the manuscript is acceptable in its current form.

Best regards,

Author Response

Thank you for addressing the previous comments and submitting the revised manuscript.
I have reviewed the updated version, and I find that the manuscript is acceptable in its current form.

A: Thank you for your positive comment!
