Youth
  • Article
  • Open Access

21 November 2025

Governing Addictive Design Features in AI-Driven Platforms: Regulatory Challenges and Pathways for Protecting Adolescent Digital Wellbeing in China

School of Law, China Jiliang University, Hangzhou 310018, China
Author to whom correspondence should be addressed.

Abstract

Chinese adolescents face significant mental health risks from addictive design features embedded in AI-driven digital platforms. Existing regulations inadequately address design-level addiction triggers in these environments, focusing primarily on content moderation and usage restrictions. This study identifies this gap and offers a novel framework that integrates systems theory and legal governance to regulate feedback loops between adolescents and digital platforms. Using the Adaptive Interaction Design Framework and a three-tiered typology of addictive design features, the research highlights how conceptual ambiguity and institutional fragmentation weaken regulatory efforts, resulting in reactive responses instead of proactive protection. To enhance regulatory effectiveness, this study recommends establishing a risk-tiered precautionary oversight system, providing enforceable definitions of addictive design features, mandating anti-addiction design practices and labeling, implementing economic measures like Pigouvian taxes, and fostering multi-stakeholder governance. It also emphasizes the need for cross-border coordination to address regulatory arbitrage. These policy directions aim to enhance regulatory efficacy and protect youth well-being in digital environments, contributing to ongoing international discussions on adolescent digital safety.

1. Introduction

The rapid integration of artificial intelligence (AI) into digital platforms has reshaped how adolescents engage, learn, and socialize online. This transformation is particularly visible in China, where 97.2% of minors now have internet access and begin using it at progressively earlier ages (). AI-driven platforms are central to youth culture and development (). In this study, the term “AI-driven platforms” refers to digital services whose core operations rely on algorithmic content curation, real-time behavioral prediction, and dynamic optimization powered by machine learning models (). These include social-media, video-streaming, and gaming services that learn from user behavior and recalibrate content delivery to sustain attention. Unlike traditional internet services, they operate as adaptive socio-technical systems that continuously adjust in response to user input. This creates feedback loops in which algorithms and users mutually influence one another, forming complex interaction patterns where small behavioral changes can generate large and often unintended outcomes ().
Extensive evidence links excessive digital use to heightened anxiety (; ), depression (; ; ), nomophobia (), sleep disturbances (; ), and academic decline (; ) among adolescents. This pattern of excessive engagement is often conceptualized as Problematic Internet Use (PIU), a maladaptive coping process marked by difficulties in self-regulation and impaired daily functioning (). Although PIU is not formally categorized as an addiction in the DSM-5 () or ICD-11 (), international bodies including the WHO and UNICEF increasingly recognize excessive digital use as a significant public health and developmental concern (; ). While early research on PIU primarily emphasized individual behavior, increasing attention has shifted to the structural and design-level factors that drive compulsive use, especially on AI-driven platforms (; ; ; ).
Within this environment, regulators and scholars have increasingly focused on “addictive design features.” This term refers to interface or algorithmic elements deliberately engineered to sustain engagement through reinforcement-based feedback loops. Some authorities describe similar mechanisms as “addictive patterns” (), while related scholarship often examines them under the broader concept of “dark patterns” (; ; ; ). This study adopts the term “addictive design features” to emphasize design elements that exploit behavioral reinforcement principles to sustain compulsive use, distinguishing them from deception-based or privacy-manipulative dark patterns ().
Adolescents are particularly vulnerable to such features (). Their cognitive systems for impulse control, risk assessment, and long-term decision-making remain immature (; ), while their sensitivity to social rewards and feedback is heightened. Limited privacy awareness, unfamiliarity with virtual currencies, and a preference for immediate gratification further reduce their ability to resist addictive design features (). These developmental traits (; ) increase their vulnerability to the negative impacts of such features. Empirical studies confirm a significant association between specific addictive design features and PIU in adolescents (), highlighting the urgent need for regulatory intervention.
In response, international regulatory frameworks have begun emphasizing design accountability. The EU’s Digital Services Act (DSA) () requires Very Large Online Platforms (VLOPs) to conduct annual systemic risk assessments on their platforms, including features that could influence users’ mental health. Similarly, California’s Age-Appropriate Design Code Act (CAADCA) () prohibits dark patterns and interface features known to harm children’s physical or mental health. In contrast, China’s approach—while offering strong youth-specific protections—remains focused on content moderation, usage-time restrictions, and mandatory parental supervision, measures that do not directly regulate the design architectures. Crucially, China still lacks a unified legal definition of “addictive design features” and established risk-based standards for their identification and control. This gap constitutes a regulatory blind spot that this study seeks to address.
Accordingly, this study pursues three interconnected objectives: (1) to explain how addictive design features in AI-driven platforms exploit adolescents’ vulnerabilities through feedback mechanisms; (2) to diagnose why China’s existing regulatory mechanisms struggle to govern these structural risks and to identify key legal gaps; and (3) to propose a risk-tiered regulatory agenda, informed by international practice, that advances proactive protection for adolescents.
To achieve these aims, this study integrates behavioral science, systems theory, and legal analysis. It introduces the Adaptive Interaction Design Framework (AIDF), which models platform-user interactions as feedback systems and identifies specific points for regulatory intervention.
This study contributes by addressing a critical gap in existing regulatory frameworks that often overlook the design elements of AI-driven platforms. Specifically, it clarifies how addictive design features operate within adaptive, closed-loop feedback environments that exacerbate adolescents’ vulnerabilities. The AIDF not only highlights these complex interactions but also offers a systematic approach for understanding the interplay between platform objectives and user behaviors. This integrated lens enables regulators to identify blind spots in current policies and encourages the development of proportionate, proactive interventions that can effectively mitigate the risks adolescents encounter in the digital environment.
The remainder of the paper is structured as follows. Section 2 outlines the research methods and interdisciplinary analytical approach. Section 3 presents the three-tiered typology of addictive design features and the AIDF model. Section 4 evaluates China’s regulatory framework through international comparison and proposes tiered governance recommendations. Section 5 concludes with broader implications for global youth protection in algorithmic environments.

2. Materials and Methods

2.1. Methodological Approach and Scope

This study adopts an interdisciplinary methodology that integrates legal-doctrinal analysis with conceptual frameworks from systems theory and behavioral science to develop a normative governance model. Legal analysis serves as the core method. Doctrinal interpretation examines Chinese statutes and regulations to identify how they allocate obligations, define protected interests, and structure enforcement. Comparative analysis places China’s framework within the context of international regulatory developments. The AIDF serves as an analytical model, conceptualizing platform–user interactions as feedback loops derived from control systems theory, which aids in identifying potential points for legal intervention to disrupt compulsive use patterns. To support the need for design-level regulation, this study relies on existing empirical evidence from behavioral science, specifically peer-reviewed studies that demonstrate adolescents’ cognitive vulnerabilities and reinforcement mechanisms.

2.2. Doctrinal and Comparative Analysis

The legal analysis covers China’s regulatory hierarchy, including national laws, administrative regulations, and ministerial rules. Primary sources include the revised Law on the Protection of Minors (), the Regulations on the Protection of Minors in Cyberspace (RPMC) (), the Provisions on Algorithmic Recommendations for Internet Information Services (PAARIIS) (), the Notice on Preventing Minors from Becoming Addicted to Online Games (), and the Interim Measures for Generative Artificial Intelligence Services (). All documents were sourced from official websites of the State Council of China, the Cyberspace Administration of China (CAC), and other relevant government agencies. This analysis used a doctrinal and textual approach to identify legal principles, normative gaps, and enforcement mechanisms. Rather than being descriptive, it is interpretive, focusing on how regulatory provisions address structural design features that sustain compulsive use.
To place China’s framework in a global context, a comparative analysis of selected jurisdictions (EU, UK, and US) was conducted. Though not exhaustive, this comparison selectively examines jurisdictions that have explicitly regulated addictive, manipulative, or dark-pattern designs in youth-protection rules and digital governance laws. The aim is to highlight diverse regulatory strategies that could inform policy development in China.

2.3. Conceptual Literature Synthesis

This study conducted a conceptual literature synthesis to develop the framework supporting the normative legal analysis. By integrating interdisciplinary scholarship, it establishes the foundational concepts for the paper’s normative argument. Relevant literature was identified through targeted searches in Web of Science, PubMed, Scopus, and CNKI using keywords “addictive patterns,” “addictive algorithms,” “addictive design,” “dark pattern,” “manipulative design,” “deceptive design,” “adolescent wellbeing,” “mental health,” “digital addiction,” “internet addiction,” and “problematic internet use.” Searches were performed on 1 July 2025 and covered publications from January 2014 to May 2025, prioritizing peer-reviewed articles, systematic reviews, and authoritative policy reports. Inclusion focused on conceptual relevance and theoretical contribution, emphasizing empirical or theoretical insights into feedback-driven design, adolescent vulnerabilities, or regulatory and ethical perspectives. Non-scholarly sources and purely technical papers without regulatory implications were excluded.
As the synthesis provides a conceptual foundation rather than new data, traditional systematic review protocols, such as PRISMA, were not applied. This approach aligns with interdisciplinary legal scholarship, where existing research informs normative reasoning, and provides empirical grounding for policy arguments instead of generating new empirical findings (). The resulting framework integrates theoretical and empirical insights to guide the subsequent legal analysis and normative critique.

2.4. Analytical Framework: The Adaptive Interaction Design Framework (AIDF)

To analyze the emergence of addictive design features and potential regulatory interventions, this study employs the AIDF as its analytical tool. Developed from control systems theory and recent interdisciplinary research (), the AIDF conceptualizes the interaction between digital platforms (“Controller”), interfaces (“Actuator/Sensor Module”), and users (“Process Module”) as a dynamic closed-loop system. This framework was selected for its ability to reveal the feedback loops driving compulsive use, identify specific points for legal intervention, and translate abstract regulatory goals into clear and actionable design requirements. In this study, the AIDF serves as a legal-analytical tool, structuring the normative analysis, clarifying the causal pathways through which addictive design features impact adolescents, and guiding the identification of regulatory interventions.

2.5. Limitations

This study has several methodological limitations consistent with its conceptual and normative scope. It relies primarily on publicly available legal texts and secondary academic sources, without original empirical research. The doctrinal and comparative analyses are interpretive in nature, meaning alternative interpretations of legal texts are possible. Furthermore, the conceptual literature synthesis is selective rather than exhaustive, prioritizing theoretical and normative relevance over comprehensive bibliometric coverage. Future empirical research could test or expand the framework’s assumptions in practical enforcement contexts.

3. Results

3.1. Definition, Harms, and a Typology of Addictive Design Features

3.1.1. Definition

Despite their growing importance in digital governance, the concept of “addictive design features” remains underdeveloped in legal and empirical scholarship. It is often discussed within broader debates on manipulative, persuasive, or deceptive interface designs. Notably, the concept of “dark patterns,” first introduced by British UX designer Harry Brignull, refers to user interface designs that intentionally mislead or pressure users into unintended actions. Subsequent frameworks, such as those by (), categorize dark patterns by methods like forced action, misdirection, and obstruction. More recent studies define them as interfaces that undermine user autonomy and decision-making to benefit business interests, often by exploiting cognitive biases ().
Addictive design features constitute a specific subset of dark patterns characterized by feedback loops that reinforce continuous engagement. Drawing on the () and related studies, they can be described as interface and algorithmic features designed to exploit neurocognitive vulnerabilities, especially those related to reward anticipation and attentional control, thereby inducing repeated use and dependency-like behaviors. Scholars further identify them as attention-capturing dark patterns that reduce disengagement cues, commonly through mechanisms such as infinite scroll, autoplay functions, and personalized recommendation feeds (). These features function within dynamic socio-technical systems, interacting with algorithms, user routines, and content delivery mechanisms to sustain usage cycles.
In China, however, this concept lacks a legal definition. Regulatory texts such as the RPMC and the PAARIIS use the vague term “inducing addiction,” which highlights outcomes like overuse or dependency but offers no clear standard for what constitutes “inducement” or “addiction.” This definitional vagueness undermines regulatory enforcement and leaves platforms without precise guidance on acceptable design practices. Given this situation, this study adopts an analytical definition of addictive design features as design choices that systematically exploit users’ cognitive and psychological vulnerabilities to prolong engagement in ways that compromise wellbeing. This definition establishes a consistent conceptual foundation for comparative and regulatory analysis and supports proposals for a clearer, legally enforceable framework aimed specifically at adolescent protection.

3.1.2. Harms of Addictive Design Features

Research on the psychological and physical harms of excessive screen time and PIU is extensive. However, few studies isolate the effects of specific addictive design features as independent variables (). Most research addresses broader phenomena such as digital addiction or social media dependency, but still provides important inferential evidence on the public health implications of such features. Drawing from this literature, four categories of harm are identified.
Neurological Harm: fMRI studies () show that exposure to personalized algorithmic content, like TikTok’s recommendations, heightens activation in key regions of the default mode network, including the medial prefrontal cortex, posterior cingulate cortex, and ventral tegmental area. Increased connectivity to visual, auditory, and frontoparietal networks accompanies this neural activity. Research () in young adults with algorithm-driven compulsive use indicates heightened regional homogeneity in the dorsolateral prefrontal cortex and related areas, suggesting over-synchronized activity linked to reward immersion and cognitive entrenchment. These neural patterns may foster addiction-like behaviors through increased self-referential processing and reward signaling. A scoping review () on children and adolescents reports consistent structural brain changes, including reduced gray and white matter volume, cortical thickness, and impaired functional connectivity, especially in the prefrontal cortex. These findings imply that chronic exposure to addictive design features may alter brain architecture and neural communication pathways, particularly those related to reward processing and executive control.
Psychological Harm: The psychological consequences of addictive digital engagement manifest across emotional, cognitive, and behavioral domains. Emotional harms include increased rates of depression, anxiety, loneliness, and mood instability (; ). Cognitive impairments encompass attention deficits (), reduced impulse control (), compromised executive function (), and difficulty with planning and decision-making. Behaviorally, users experience loss of control over their usage, guilt about time spent online, withdrawal-like symptoms when attempting to reduce use, and neglect of offline responsibilities (). These psychological impairments reduce individual agency and resilience, exacerbating developmental vulnerabilities and threatening the mental health of future generations.
Risk Exposure: Extended digital engagement increases exposure to secondary risks such as cyberbullying, online grooming, scams, and inappropriate content (; ). Youth with psychological issues such as impaired judgment and reduced impulse control are particularly vulnerable to online manipulation and risky decision-making (). These risks compromise personal safety and undermine the protective environments necessary for healthy youth development.
Comorbidities with Physical Health Harm: Addictive digital engagement negatively impacts physical health, leading to sleep disruption, weight changes, and academic challenges (; ). These effects may be mediated by both direct mechanisms, such as blue light exposure affecting circadian rhythms, and indirect pathways, including reduced physical activity and fewer face-to-face interactions (). Cumulatively, these health impacts can affect educational attainment, physical development, and long-term societal productivity.
AI-driven addictive design features use advanced algorithmic mechanisms and machine learning to develop hyper-personalized manipulation strategies that operate below the user’s awareness. These strategies exploit cognitive vulnerabilities, weaken autonomy, and create a power imbalance between technology and human agency. While direct causality remains unconfirmed, evidence links these features to PIU, which is strongly associated with diverse harms (). Adolescents are particularly vulnerable due to ongoing development and limited self-regulation. From a public health perspective, a precautionary approach to regulating addictive design features is both reasonable and necessary to foster a sustainable and resilient digital ecosystem for youth.

3.1.3. A Three-Tiered Typology of Addictive Design Features

Effective governance of adolescent digital wellbeing requires understanding not only the harms but also the structural logic of how addictive design features operate. Building on typologies proposed by () and (), this study adopts a three-tiered framework that classifies design features according to their strategic intent, psychological mechanism, and specific interface techniques. This hierarchical structure distinguishes design features by abstraction level and operational specificity (see Table 1), conceptualizing digital platforms as layered socio-technical systems. The framework provides a shared vocabulary for proportionate governance capable of addressing AI-driven personalization and feedback loops.
Table 1. Typology of Addictive Design Features.
At the high level, the typology identifies four strategic intentions commonly used by digital platforms to shape user behavior. Interface Interference manipulates user perception, comprehension, or navigation through visual emphasis, information overload, or deceptive choice architectures. Social Engineering employs principles of social psychology and behavioral economics to influence users through emotional cues, social norms, and cognitive biases. Persistence encourages continued engagement by exploiting users’ sensitivity to unfinished tasks and prior investment. Forced Action coerces users into specific behaviors or timed interactions to maintain functionality, thereby restricting autonomy and reinforcing compulsive use.
Building on these strategies, the mid-level focuses on the psychological mechanisms that target cognitive and emotional traits (; ; ; ; ), particularly adolescents’ limited impulse control and heightened need for social validation. These mechanisms function as operational bridges, translating abstract strategic goals into specific psychological levers that create self-reinforcing engagement patterns. At the low level, these psychological mechanisms materialize as concrete, application-specific interface features that users directly interact with (; ; ; ; ). These features are key intervention points for regulation because they are empirically observable and technically modifiable.
The framework conceptualizes addictive design as a layered system linking abstract manipulation goals to concrete interface practices. Strategic intentions define why engagement is pursued; psychological mechanisms explain how cognitive vulnerabilities are exploited; and interface features describe what users experience. This distinction clarifies that not all addictive design features pose equal risks or demand identical regulatory responses, supporting proportionate and adaptive governance. The typology also provides a conceptual bridge to the AIDF introduced in Section 3.1.4, which maps legal obligations to specific intervention points across these layers. As digital environments evolve with generative AI, predictive analytics, and reinforcement learning, the typology remains a dynamic framework open to continuous refinement.

3.1.4. Conceptualizing Addictive Design Features Within AIDF

This study enhances the three-tiered typology by applying the AIDF to illustrate how addictive design features function as closed-loop feedback systems connecting platforms, interfaces, and users. Derived from control systems theory and its specific application in analyzing AI-driven interfaces (), the AIDF conceptualizes digital environments as adaptive socio-technical systems that optimize user engagement through continuous feedback.
  • Core Components of the AIDF
In traditional control systems, key components interact through feedback loops to manage behavior. The controller processes input signals and generates outputs to achieve system goals. The process module represents the influenced system, producing measurable outputs. The actuator transforms the controller’s decisions into actions, while the sensor captures resulting data for real-time adjustment (). In the AIDF, these components correspond directly to the digital platform ecosystem (Figure 1): the platform functions as the controller, the user serves as the process module, and the interface acts as both actuator and sensor, transforming algorithmic outputs into interactive formats and capturing user interaction data.
Figure 1. AIDF for Understanding Addictive Design Features.
  • Operation of the Feedback Loop
The system operates through continuous interaction cycles. When an adolescent user interacts with content, the interface collects behavioral signals such as completion rates and gestures. These signals are processed by platform algorithms that recalibrate outputs, selecting new content or modifying visual cues to sustain attention. Addictive design features exploit positive feedback loops, where engagement signals amplify subsequent outputs, creating self-reinforcing behavioral cycles.
On AI-driven platforms, this process is optimized through large-scale behavioral datasets. The platform’s effectiveness in presenting content and gathering behavioral data relies on various design features, each with specific affordances. These affordances refer to the relational properties between users and design features, encompassing perceived and actual capabilities () that determine how users can interact with a given feature at a specific time and context (). Affordances operate across cognitive, physical, functional, and sensory dimensions () to sustain user interaction and preserve loop continuity.
For analytical clarity, the AIDF formalizes these dynamics using control systems notation (see Table 2). User behavioral signals are denoted as I(t), platform outputs as O(t), and the platform’s processing function as fP, which maps inputs to outputs. The interface’s presentation function fO transforms platform outputs into interactive formats, while its collection function fI captures user responses. User condition C(user) reflects the psychological and developmental factors influencing responses. Design features d and their affordances A(d,t) specify how interface elements enable or constrain user actions at particular times. At each time step t, the platform processes I(t) through fP to generate O(t + 1), which the interface presents through fO, prompting user responses that become I(t + 1), thereby closing the feedback loop.
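For readers who prefer compact notation, the definitions above can be restated as a closed-loop update. The response(·) term below is an assumed placeholder for the user’s behavioral reaction, introduced here purely for illustration rather than drawn from the source framework:

```latex
\begin{aligned}
O(t+1) &= f_{P}\bigl(I(t)\bigr) && \text{platform (Controller) selects the next output}\\
r(t+1) &= \mathrm{response}\bigl(f_{O}(O(t+1)),\, A(d,t),\, C(\mathrm{user})\bigr) && \text{user reacts to the presented interface}\\
I(t+1) &= f_{I}\bigl(r(t+1)\bigr) && \text{interface (Sensor) captures new behavioral signals}
\end{aligned}
```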
Table 2. AIDF Variables with a Short-Form Video Feed Example.
A typical Douyin (Chinese version of TikTok) interaction exemplifies this process. When a user swipes through short videos, the platform captures behavioral signals I(t)—including watch duration and replay behavior. The recommendation algorithm fP analyzes these signals to infer engagement patterns and identify content features that maximize retention. Within milliseconds, the algorithm generates O(t + 1) by selecting the next video predicted to sustain attention. The interface presents this output through fO via immersive autoplay, while design affordances A(d,t) such as infinite scroll remove natural stopping cues. As users interact, the interface’s sensor function fI captures new data, feeding it back into fP to refine recommendations. At each time step t, the system updates based on accumulated behavioral data, generating an adaptive feedback process. This closed loop operates continuously, with each interaction cycle training the algorithm to adapt more precisely to the user’s C(user) characteristics.
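The walkthrough above can be made concrete with a minimal simulation sketch. The code below is purely illustrative: the function names mirror the AIDF notation (f_P, f_O, f_I), but the scoring heuristic, the content catalog, and the reward_sensitivity parameter are hypothetical assumptions, not any platform’s actual recommendation logic.

```python
# Illustrative sketch of one AIDF feedback loop (not any platform's real algorithm).
import random

def f_P(signals, catalog):
    """Controller: choose O(t+1), the item predicted to maximize engagement."""
    return max(catalog, key=lambda item: predicted_watch_time(item, signals))

def predicted_watch_time(item, signals):
    # Hypothetical retention model: topics the user has already watched longer
    # score higher, mimicking engagement-maximizing optimization.
    history_bonus = sum(s["watch_time"] for s in signals if s["topic"] == item["topic"])
    return item["base_appeal"] + history_bonus

def f_O(item):
    """Actuator: present the output with autoplay and infinite scroll (no stopping cue)."""
    return {"video": item, "autoplay": True, "infinite_scroll": True}

def f_I(user_state, presentation):
    """Sensor: capture the behavioral signal I(t+1) produced by the user's response."""
    video = presentation["video"]
    watch_time = video["base_appeal"] * user_state["reward_sensitivity"]  # C(user)
    return {"topic": video["topic"], "watch_time": watch_time}

catalog = [{"topic": t, "base_appeal": random.uniform(0.3, 1.0)}
           for t in ["dance", "gaming", "study", "prank"]]
user = {"reward_sensitivity": 1.5}   # C(user): heightened reward sensitivity
signals = []                         # accumulated behavioral data I(t)

for t in range(5):                   # each iteration is one closed-loop cycle
    o_next = f_P(signals, catalog)           # O(t+1) = f_P(I(t))
    presentation = f_O(o_next)               # interface renders the output
    signals.append(f_I(user, presentation))  # I(t+1) closes the loop
```

Because each captured signal raises the next prediction for the same topic, the sketch converges on whatever holds the simulated user’s attention, which is precisely the positive-feedback dynamic the AIDF is designed to expose.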
The three-tiered typology of addictive design features integrates into this model across strategic, psychological, and operational levels. High-level strategies define the platform’s objectives; mid-level mechanisms specify how vulnerabilities are leveraged; and low-level design patterns represent concrete affordances A(d,t) where regulation can directly intervene. From a control systems perspective, effective regulation must introduce negative feedback to counteract reinforcing loops. This involves constraining the platform’s optimization objectives (redefining fP to incorporate wellbeing metrics), redesigning interface affordances (modifying d and A(d,t) to introduce friction), and addressing user vulnerabilities through targeted interventions on C(user). Regulations that alter interface appearance without changing algorithmic objectives remain ineffective, since platforms can comply formally while maintaining engagement-maximizing logic.
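As a minimal sketch of what such negative feedback could look like in practice, the snippet below redefines the controller’s objective to subtract a wellbeing penalty and modifies an affordance to reintroduce a stopping cue. The weights, the session threshold, and the harm proxy are illustrative assumptions rather than requirements drawn from any existing regulation or standard.

```python
# Hedged sketch of a "negative feedback" intervention on f_P and A(d,t);
# all weights, thresholds, and the harm proxy are illustrative assumptions.

ENGAGEMENT_WEIGHT = 1.0
WELLBEING_WEIGHT = 0.8            # strength of the wellbeing penalty on f_P
SESSION_FRICTION_MINUTES = 40     # A(d,t): break-prompt threshold for minors

def constrained_objective(predicted_engagement, session_minutes, is_minor):
    """Redefined f_P objective: engagement minus a harm proxy, not engagement alone."""
    harm_proxy = (session_minutes / 60.0) if is_minor else 0.0
    return ENGAGEMENT_WEIGHT * predicted_engagement - WELLBEING_WEIGHT * harm_proxy

def present_with_friction(item, session_minutes, is_minor):
    """Modified affordance A(d,t): autoplay is suspended once the friction threshold is reached."""
    needs_break = is_minor and session_minutes >= SESSION_FRICTION_MINUTES
    return {"item": item, "autoplay": not needs_break, "break_prompt": needs_break}
```

The point of the sketch is structural: unless the optimization objective itself is altered, interface-level adjustments leave the engagement-maximizing loop intact, which is the failure mode the Minors’ Mode case below illustrates.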
  • Illustrating AIDF: The “Minors’ Mode” Regulatory Failure in China
China’s “Minors’ Mode” illustrates the pitfalls of interface-only regulation. Implemented across major platforms like Douyin, Kuaishou, and Bilibili in China, Minors’ Mode mandates age verification, daily time limits, content filtering, and disabled social features. In AIDF terms, it is an interface-level intervention that alters design affordances A(d,t)—for example, filtering content O(t), and disabling engagement features—while leaving the Controller’s optimization objective (maximizing engagement via fP) unchanged.
A study of 21 video applications with “Minors’ Mode” protections found that over half disabled search functions, yet those retaining search still produced sexually suggestive results for queries such as “beauty” and “sexy.” Algorithmic recommendations even surfaced soft pornography within “educational” categories (). This illustrates that even with constrained interfaces, platforms continue to collect behavioral data via the Sensor, tracking which videos adolescents watch longer and which keywords they search, and feeding these signals back into fP to refine content delivery. The reinforcing feedback loop persists because adolescent engagement signals are reinterpreted as indicators of interest, prompting further optimization toward attention-grabbing content.
Minors’ Mode restricts the Actuator without addressing how platforms re-optimize fP or exploit fI’s ongoing data collection. Effective interface regulations must anticipate circumvention by mandating algorithmic transparency to make fP’s objectives auditable, limiting fI’s behavioral data collection to reduce hyper-personalized targeting, and requiring design affordances A(d,t) that introduce structural friction against adaptive re-optimization. The Minors’ Mode case thus illustrates the importance of interface governance based on closed-loop system dynamics to prevent platforms from circumventing regulations.
  • Comparative Positioning of the AIDF
While various disciplines address digital platform harms, the AIDF offers a distinct regulatory lens. Unlike models grounded in behavioral economics or addiction neuroscience, the AIDF conceptualizes manipulation as an emergent, self-reinforcing process linking platform control functions, interface affordances, and behavioral feedback.
Behavioral economics explains digital persuasion through choice architecture () and has influenced regulations like GDPR’s “privacy by default” () and opt-in consent requirements. However, it treats choice architecture as a static design, whereas the AIDF recognizes that AI-driven platforms dynamically adapt this architecture in response to user feedback. Such systems identify individual vulnerabilities and optimize exploitation strategies in real time. Initial nudges generate behavioral responses that the system interprets as preference signals, reinforcing engagement in subsequent iterations. Accordingly, effective regulation should target the structure of feedback loops rather than isolated decision points.
Addiction neuroscience links compulsive digital use to dopamine-based reward reinforcement (). While this view explains adolescent neurological vulnerability, it focuses on individual predispositions. In contrast, the AIDF situates addiction within the platform-user interaction, showing how system dynamics amplify vulnerabilities. This distinction between vulnerability and exploitation carries significant regulatory implications. Individual-focused interventions, such as education and self-control training, address symptoms, while AIDF-informed regulation targets the causal mechanisms that generate addictive engagement.
By linking strategic objectives, psychological mechanisms, and design affordances, the AIDF clarifies how platforms optimize engagement and where interventions should occur. It enables policymakers to trace harms through the feedback loop—from behavioral outcomes back to optimization functions—supporting anticipatory regulation that constrains both interface design and algorithmic learning processes responsible for emerging exploitative patterns. Having established the conceptual foundations and structural logic of addictive design features, the next step is to examine how China has responded to these risks through its evolving regulatory framework.

3.2. Chinese Regulatory Framework

3.2.1. Legal Foundations in Adolescent Protection: The Starting Point of Addictive Design Features Regulation

China’s regulatory response to addictive design features originated from growing concerns over adolescent PIU, particularly in online gaming. Early governmental efforts emphasized behavioral controls to mitigate excessive screen time. In 2007, the Ministry of Culture and Tourism and the State Administration for Industry and Commerce issued the Notice on Further Strengthening the Management of Internet Cafes and Online Games, introducing mandatory anti-addiction systems that imposed restrictions through online game rules and promoted technical solutions to mitigate youth gaming addiction.
By 2013, the Ministry of Culture and Tourism and the CAC jointly released the Comprehensive Prevention and Treatment Program for Adolescent Online Game Addiction, which expanded oversight by requiring age-appropriate reminders, parental supervision, and supportive in-game environments. A further milestone came in 2021 when the National Press and Publication Administration issued the Notice on Further Strict Management and Effective Prevention of Minors Addiction to Online Games. This policy imposed strict hourly limits, mandatory real-name registration and login controls, marking the transition from voluntary management to state-enforced prevention.
Over time, regulatory focus shifted from controlling usage duration to addressing the design features embedded in platform design. The 2020 revision of the Law of China on Protection of Minors was a turning point. Article 74 of this law prohibits online service providers from offering products or services likely to induce addiction in minors. Although it does not explicitly mention algorithms or interface design, its broad language implicitly covers design features that promote compulsive interaction. This article established the legal foundation for regulating design-level harms and enabled subsequent normative expansion.
Building on this base, the 2024 RPMC introduced detailed and enforceable standards. Chapter V (Articles 39 to 49) establishes a comprehensive framework to prevent digital overuse among minors. Article 42 requires providers to develop robust anti-addiction systems, promptly revise addictive features, and disclose prevention measures annually. Article 43 mandates age-specific “minors’ mode” for online games, live streaming, audio-video, and social media platforms. These modes must adhere to national standards for time, duration, functions, and content, ensuring practicality and effectiveness, while offering guardians convenient tools for managing minors’ time, permissions, and spending. Article 44 introduces age-based spending caps and prohibits paid services beyond minors’ civil capacity. To counter “traffic-first” incentives, Article 45 prohibits creating or promoting online communities centered on fan funding, ranking votes, or engagement manipulation, and obliges providers to prevent and stop their users from inducing minors into such behaviors. Furthermore, Article 46 requires real-name verification via a national identification system and bans account rentals or sales to minors. Article 47 directs game providers to refine rules and avoid content or features harmful to minors’ physical or mental health. They must implement an age-rating system, categorizing games by type, content, and function, and clearly display suitable age group information at key access points like download, registration, and login screens. Beyond platform obligations, the RPMC also extends responsibilities to families, schools, and state authorities to ensure coordinated protection of minors. Articles 40 and 41 require schools to identify early signs of digital addiction and inform guardians, while parents must supervise digital habits. Article 48 defines responsibilities for education, public health, and cyberspace agencies in awareness, enforcement, and research on digital overuse and psychological disorders.
These measures represent a shift from reactive approaches to proactive design-level accountability. However, the absence of an operational legal definition of “addiction-inducing” features continues to undermine enforceability (), exposing a fundamental normative gap in the regulatory system.

3.2.2. Regulatory Measures on Algorithmic and Platform Governance

As digital platforms became more complex and engaging, regulation expanded beyond youth protection to address algorithmic and platform architecture. A series of administrative rules and policy guidelines aim to mitigate risks associated with addictive design features, particularly in recommender systems and automated decision-making processes.
A key component of this governance framework is the PAARIIS issued by the CAC. Article 8 requires providers to regularly audit their algorithmic models, data inputs, and outcomes, and explicitly bans algorithms that induce addiction or excessive reliance. Article 18 mandates heightened responsibilities regarding minors, requiring platforms to establish internal oversight, provide age-appropriate modes, and avoid recommending content that promotes unsafe imitation, unhealthy habits, or compulsive usage patterns detrimental to adolescents’ wellbeing. To improve transparency, Article 16 obliges platforms to indicate when algorithmic recommendations are used and to disclose their core principles, intended objectives, and operational logic in accessible formats. Additionally, the regulation encourages technical interventions such as content de-duplication, disruptive prompts, and improved interpretability of algorithmic systems governing search, sorting, presentation, and push functionalities. These measures aim to limit excessive personalization and overstimulation, factors linked to digital echo chambers and compulsive usage.
In July 2023, seven government departments including the CAC issued the Interim Measures for the Management of Generative Artificial Intelligence Services. Article 10 places a heightened duty of care on generative AI service providers concerning minors, requiring them to take “effective measures” to prevent overuse or dependency. Although the scope of such “effective measures” is not specified, it can be interpreted in light of existing youth protection initiatives, encompassing access-time limits, identity verification, risk warnings, and feedback or complaint mechanisms.
Beyond formal rules, China frequently employs targeted enforcement through administrative “special campaigns.” The 2024 “Qinglang” campaign on algorithmic misconduct () prioritized rectifying addictive recommendation practices. These campaigns, led by a primary regulator in coordination with multiple departments, temporarily concentrate enforcement resources for rapid inspections and sanctions (). They create strong compliance pressure and short-term deterrence but have limited long-term impact.
These developments demonstrate rising regulatory awareness of how algorithmic infrastructures reinforce compulsive digital behaviors and threaten adolescent wellbeing. Yet many provisions rely on broad ethical language, enforcement lacks consistency, and precise technical criteria for defining “addiction-inducing” patterns are still absent. Without clearer standards and stable institutional oversight, the effectiveness of these regulations in addressing the root causes of adolescent digital overexposure remains limited. Building on this regulatory baseline, the next section situates China’s approach within a global context to highlight convergent trends, divergent priorities, and lessons that may inform future policymaking.

3.3. Comparative Overview: International Regulatory Approaches to Addictive Design Features

Across jurisdictions, governments have adopted varied legal and policy strategies to address risks from addictive design features in digital environments, particularly concerning children and adolescents. Despite differences in scope and enforcement, these regimes share a focus on restricting behavioral manipulation embedded in platform architecture and algorithms.
In the European Union, the 2022 DSA requires VLOPs to assess and mitigate systemic risks, including those from addictive user engagement. Recital 83 highlights risks from online interface designs that promote behavioral addictions, while Recital 70 mandates the transparent presentation of the parameters of recommender systems. Article 34 further requires platforms to evaluate impacts on minors’ rights, mental health, and public wellbeing. The DSA explicitly prohibits “dark patterns,” which Recital 67 defines as practices that distort or impair users’ autonomous choices. Recital 81 specifically highlights interfaces that intentionally or unintentionally exploit minors’ vulnerabilities or encourage addictive behavior. Although the term “addictive design features” is not explicitly used, so-called “hyper-engaging dark patterns,” that is, designs that use big data analytics and behavioral profiling to induce greater user engagement, are widely recognized as manipulative practices prohibited under the DSA and other EU laws such as the Unfair Commercial Practices Directive (UCPD) (). The European Parliament has called for a “right not to be disturbed,” suggesting that engagement-maximizing features such as infinite scroll, autoplay, and push notifications should be deactivated by default for minors to protect their time and attention (). In 2024, the European Commission launched an investigation into TikTok Lite’s “Task and Reward Programme,” citing systemic mental-health risks to minors. Following precautionary measures by the Commission, TikTok voluntarily suspended the feature across the EU. The case demonstrates the DSA’s capacity for rapid intervention, but it also reveals compliance gaps, since the feature was launched prior to the legally required risk assessment. The reliance on voluntary suspension rather than a final formal sanction indicates that enforcement remains reactive and dependent on platform cooperation, and that greater technical expertise and resources are needed to secure strict proactive compliance.
The UK’s Online Safety Act () focuses on regulating online services to protect users, especially children, from harmful content. It empowers the Office of Communications (Ofcom) to enforce online safety codes and mandates risk assessments and safety measures for child-accessible services. However, the Act does not explicitly address addictive design. This gap has been partially filled by the Age-Appropriate Design Code (), which sets fifteen standards for digital services used by children. The code urges providers to limit persuasive design, minimize data collection, and avoid nudges extending screen time. Since the Online Safety Act came into force, Ofcom has initiated enforcement actions for non-compliance, including penalizing failures to respond to information requests, monitoring the use of geoblocking to evade regulation, and promoting perceptual hash matching to detect child sexual abuse material. Although no direct sanctions for addictive design have yet occurred, the framework is tightening control over interface harms through mandatory risk assessments, safety-by-design duties, and accountability for services targeting UK users. However, Ofcom’s delayed publication of the updated Protection of Children Codes of Practice in April 2025 created a temporary regulatory gap. Overlapping mandates between the Information Commissioner’s Office (ICO) and Ofcom further risk fragmented oversight across data protection and interface design domains.
In the United States, federal regulation of addictive design features remains limited, but several states have advanced child-focused initiatives inspired by UK models. California’s 2022 CAADCA requires platforms accessible to minors to conduct data protection impact assessments and set developmentally appropriate defaults. It also limits dark patterns and prohibits algorithmic profiling of children. However, the law has faced legal challenges, with a federal district court issuing a preliminary injunction in 2023, though the Ninth Circuit partially upheld it in 2024. The case remains pending on appeal. California also introduced Senate Bill 287 (2023) (), seeking to ban designs, algorithms, or features known to substantially increase risks of addiction, self-harm, eating disorders, or suicidal behaviors among children. Despite significant public attention, it failed to pass, reflecting the legal and political complexities inherent in U.S. platform regulation. New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act () represents a major state-level step: to curb online addiction, it prohibits social media platforms from providing addictive feeds to minors without parental consent. At the federal level, the Social Media Addiction Reduction Technology (SMART) Act () proposed banning infinite scroll and autoplay while setting default usage limits. However, the bill failed to gain traction and was not reintroduced in subsequent sessions. In contrast to UK and EU approaches that emphasize ex ante design obligations and prompt administrative remedies, U.S. regulatory efforts have been most effective where they rest on mature, technology-neutral doctrines of deception and unfairness established through litigation. However, these efforts run into problems when broad, child-focused design mandates trigger First Amendment challenges, vagueness concerns, or federal preemption risks, all of which delay implementation and limit regulatory reach. In particular, constitutional constraints sharply restrict regulatory scope in the United States, where First Amendment protections that treat algorithmic curation as a form of protected speech, together with Section 230 immunity, have repeatedly hindered or postponed state-level design regulations through industry-led litigation.
These global efforts reflect a growing recognition of addictive design features as a systemic regulatory concern with significant implications for child development. While national approaches vary, a clear convergence has emerged toward integrating child rights, digital ethics, and public health perspectives into governance frameworks. However, enforcement experiences consistently reveal challenges that constrain regulatory effectiveness: vague definitions enable platforms to circumvent compliance; institutional fragmentation weakens coordination; constitutional and jurisdictional limits restrict intervention; and significant delays separate rulemaking from implementation. Across jurisdictions, these challenges highlight the need for a more proactive and harmonized regulatory strategy to safeguard young users from the evolving risks of addictive design.

4. Discussion

Having examined the conceptual foundations, structural mechanisms, and global regulatory approaches to addictive design features, the discussion now turns to the analytical implications of these findings. This section explores the underlying challenges within China’s current governance framework, situates them within broader international patterns, and outlines pathways for improving regulatory effectiveness.

4.1. Challenges of China’s Regulatory Frameworks on Addictive Design Features

4.1.1. Conceptual Ambiguity and Normative Gaps

Despite China’s increasing focus on regulating addictive design features, the absence of definitional precision continues to undermine enforceability. Article 74 of the Law on Protection of Minors prohibits providing products or services “likely to induce addiction in minors”, but it offers no operational criteria for determining what constitutes an addiction-inducing design. Similarly, Article 18 of PAARIIS prohibits algorithms that “induce user addiction or excessive reliance” yet fails to specify what algorithmic structures or behavioral effects meet that threshold. This vagueness hampers implementation, as regulators have not been able to clearly distinguish between persuasive designs () that support user welfare (e.g., transparent reminders or learning achievement badges) and manipulative designs that exploit cognitive vulnerabilities through opacity, variable rewards, or personalized loss-aversion triggers. Without this differentiation, regulations risk adopting overbroad, one-size-fits-all standards () that discourage legitimate innovation while failing to address real harms.
This definitional ambiguity reflects a broader regulatory tendency to prioritize observable behavioral outcomes (screen time, usage frequency) over the underlying design mechanisms that generate them. Current frameworks overlook the layered structure of addictive design features. For example, infinite scroll (a low-level pattern) operates alongside variable reward schedules (a mid-level psychological mechanism) that are strategically employed to maximize retention (a high-level goal). These features exploit adolescents’ developmental vulnerabilities, conditioning compulsive habits similar to substance dependence (; ). To be effective, regulation must move beyond static feature control toward dynamic system intervention, addressing the feedback loops that perpetuate engagement. Legal norms should function as negative feedback mechanisms, introducing friction points or mandating algorithmic transparency to counteract self-reinforcing optimization.
Future legislation should therefore target the structural properties and systemic dynamics of addictive design architectures. This requires embedding a structural and process-oriented understanding of addiction into regulatory discourse, with precise definitions to be developed later through statutes or policy measures.

4.1.2. Structural Weaknesses in Enforcement and Institutional Coordination

Even when regulatory intent is clear, China’s governance of addictive design features is hindered by fragmented authority and limited institutional capacity. Regulatory authority is divided among the CAC, the National Press and Publication Administration, the Ministry of Culture and Tourism, and provincial cyberspace offices, each holding overlapping but decentralized responsibilities. This fragmentation hinders sustained, cross-sectoral efforts to address design-based harms that affect different content categories and application types. Equally significant is the exclusion of public health authorities from this framework, despite growing recognition that addictive design features constitute a public health risk. This omission limits the state’s ability to translate psychological and neuroscientific insights into concrete regulatory standards.
Enforcement mechanisms are weakened by vague compliance standards and an over-reliance on platform self-regulation. Article 8 of PAARIIS mandates platforms to “regularly audit, evaluate, and verify” algorithms, but it specifies neither audit frequency nor evaluation criteria, leaving compliance standards intentionally vague. Although the regulation prohibits algorithms that induce addiction, it relies entirely on platforms’ internal assessments to determine compliance, thereby transforming a public duty into a private discretion. Consequently, the compliance ecosystem favors superficial interventions, such as setting time limits or issuing parental controls, while deeper design structures and algorithmic logic remain largely unexamined.
Enforcement efforts also lack continuous and institutionalized mechanisms. Although campaign-style inspections create temporary regulatory pressure, their episodic nature allows platforms to implement short-term adjustments and revert to previous engagement strategies once oversight diminishes (). This cyclical pattern reveals the system’s vulnerability and undermines long-term reforms to addictive design features. A more stable and proactive enforcement framework is necessary to ensure that regulatory attention goes beyond superficial compliance and addresses the structural sources of harm.

4.1.3. Regulatory-Industry Asymmetries and the Knowledge Gap

A significant barrier to effective regulation arises from the asymmetry of technical expertise and information between digital platforms and government agencies (). As algorithmic systems become more complex, regulators struggle to fully understand the practical functioning of recommendation engines, personalization algorithms, and real-time engagement loops. Although Article 16 of PAARIIS requires platforms to “publicly disclose algorithmic principles, objectives, and operational logic in accessible formats,” enforcement relies entirely on self-reported disclosures without independent verification. This allows platforms to provide broad, technically compliant statements while concealing the specific behavioral levers used to exploit psychological vulnerabilities.
This knowledge gap is particularly evident in the lack of interdisciplinary expertise within enforcement agencies, whose staff typically has training in law or public administration but lacks skills in data science or behavioral psychology. Consequently, regulatory reviews often concentrate on visible indicators, such as screen time or formal policy documentation, inadvertently overlooking more subtle forms of cognitive manipulation embedded in the design of interfaces or algorithms. Platforms possess deep insights into user behavior and monetization strategies, allowing them to refine engagement features while appearing compliant, often making only cosmetic changes that do not address core addictive mechanisms.
To establish a robust and sustainable regulatory framework, there must be long-term investments in interdisciplinary talent, institutional reforms for design-level evaluations, and the establishment of independent auditing systems. Without these measures, governance will remain reliant on industry disclosures and sporadic campaigns, lacking the consistent oversight needed to confront systemic risks. In light of these domestic limitations, examining international regulatory experience offers valuable insights into alternative institutional logics and possible pathways for improvement.

4.2. International Experience: Convergence, Divergence, and Complementarity

Global regulatory responses to addictive design features have diversified but increasingly converge on the recognition that such designs pose public health and developmental risks. This international experience showcases distinct institutional logics and legal architectures that shape diverse governance styles. To illustrate these differences and similarities in regulatory approaches, the following table (Table 3) provides a comparative overview of key regulatory frameworks across various jurisdictions.
Table 3. Comparative Overview of Regulatory Approaches to Addictive Design Features.

4.2.1. Convergence

Across jurisdictions, a shared understanding has emerged that addictive design features should not be viewed merely as neutral business strategies but as regulatory issues with significant public health and developmental implications ().
The EU’s DSA, alongside the 2023 Parliamentary Resolution on addictive design, signifies a shift from content-focused regulation to interface-level accountability. Features like infinite scrolling, autoplay, and excessive notifications are increasingly viewed as forms of behavioral manipulation that impair user agency. Similarly, China’s legal reforms explicitly prohibit algorithmic models that foster addiction or overconsumption, indicating a move toward upstream risk governance aligned with developmental health priorities. Although federal action is limited in the U.S., state laws, such as New York’s SAFE for Kids Act, suggest growing alignment with international calls for upstream design regulation. These developments collectively highlight a broader trend toward anticipatory governance focused on the unique health and developmental rights of youth.

4.2.2. Divergence

Despite these overlapping normative goals, significant differences remain in regulatory structure and enforceability.
The EU model is characterized by a rights-based and preventive orientation. The DSA imposes proactive obligations on VLOPs, mandating independent algorithmic audits, systemic risk assessments for algorithmic impacts, and user controls over personalized recommendations to ensure transparency and autonomy. It prohibits interface designs that exploit cognitive biases and undermine user autonomy, backed by robust enforcement mechanisms that include fines of up to 6% of global annual revenue. The AI Act (Regulation (EU) 2024/1689) () adds transparency obligations for high-risk AI systems, though its focus on intentional manipulation narrows its applicability to addictive design features. The 2023 EU Parliamentary Resolution calls for precise guidelines to restrict such features, especially for children. This comprehensive approach prioritizes preventive regulation and user protection, thereby positioning the EU as a proactive model for governing addictive platform architectures. The EU’s success in establishing this framework reflects its political culture emphasizing individual rights, precautionary risk governance rooted in decades of environmental and consumer protection regulation, and institutional capacity through well-resourced agencies.
In contrast, the U.S. approach adopts a “safe harbor” model that favors market-driven self-regulation, constrained by the First Amendment and Section 230 of the Communications Decency Act. These protections shield platforms from liability for user-generated content, complicating efforts to regulate addictive design features. The First Amendment treats algorithms as protected speech, as affirmed in cases like Bernstein v. United States (), thus invalidating federal and state laws that cannot withstand strict constitutional scrutiny. Section 230 further constrains accountability for algorithms, as demonstrated in the Social Media Addiction case (), where claims concerning algorithmic dopamine manipulation were dismissed to the extent they were tied to content moderation. The result is a legal environment that privileges innovation and market freedom but perpetuates enforcement gaps, relying on reputational incentives and self-regulation to induce compliance. The struggle to regulate addictive design features in the U.S. arises from a libertarian political culture that prioritizes commercial speech and innovation freedom, a fragmented federal structure that exposes state initiatives to preemption challenges, and powerful industry lobbying that has reframed design regulation as censorship. In certain contexts, reliance on market competition and reputational accountability has occasionally motivated platforms to adopt voluntary safeguards, suggesting that self-regulation may complement formal governance when coupled with transparency and consumer scrutiny.
China’s approach combines centralized oversight with normative guidance, as seen in the Law of China on Protection of Minors and the RPMC, both mandating comprehensive anti-addiction systems for minors. The PAARIIS calls for algorithmic transparency, requiring platforms to disclose algorithmic principles and data sources, adhere to ethical standards, and prevent excessive engagement, especially among minors. However, vague terms like “inducing addiction” or “violating social morality” create significant ambiguity in identifying addictive design features. Furthermore, China’s enforcement, led by the CAC, largely relies on top-down actions, with comparatively weak penalties (e.g., fines of up to RMB 100,000), which diminishes accountability. While user complaint mechanisms and periodic reviews are mandated, the absence of independent auditing and third-party involvement limits procedural clarity and the depth of enforcement. China’s governance style reflects its political tradition of centralized authority, facilitating rapid policy implementation while prioritizing collective welfare and social order. Yet the effectiveness of enforcement depends heavily on bureaucratic capacity and local coordination, which vary across provinces. Regulatory actions are often policy-driven rather than evidence-based, and modest penalties fail to offset platforms’ profit incentives.

4.2.3. Complementarity and Lessons for China

Each regulatory model, despite its limitations, offers distinct strengths that can inform others. A global framework for governing addictive design features should prioritize complementarity over uniform convergence and be grounded in contextual adaptation. Understanding why certain approaches succeed requires a critical analysis of the institutional foundations and the political and cultural contexts that enable their effectiveness.
The EU’s strength lies in translating broad rights principles into specific, enforceable obligations. Rooted in its post-World War II commitment to protecting individual dignity and democratic values, this approach justifies state intervention against private manipulation to preserve personal autonomy. The DSA’s requirement for systemic risk assessments with explicit criteria provides actionable benchmarks that reduce interpretive ambiguity. Mandatory independent audits introduce external accountability that internal compliance mechanisms cannot achieve. High penalty thresholds create financial deterrence proportionate to platform scale. However, the EU model’s success is partially attributable to its political culture, which embraces precautionary intervention, as well as to its well-funded, technically sophisticated enforcement agencies. For China, the lesson is not to replicate EU institutional structures wholesale but to adopt specific mechanisms—precise definitional standards, independent technical auditing, and proportionate penalties—that address identified gaps in its current framework. With sufficiently specific regulatory standards, China’s centralized administrative capacity could enable faster enforcement.
The United States’ market-oriented approach, grounded in competition and reputational accountability, illustrates how self-regulation and user trust can complement formal oversight. In this model, platforms adjust design choices to preserve user trust and adopt industry codes of conduct, creating pathways for China and the EU to maintain user confidence and industry involvement while balancing innovation with public welfare. Drawing on the U.S. experience, China need not adopt self-regulation as its primary strategy, but it could incorporate industry technical expertise into standards-setting processes to enhance regulatory precision and reduce implementation friction.
China’s governance model is characterized by state-directed intervention and sector-wide mandatory measures, demonstrating a strong capacity for rapid compliance mobilization. Grounded in collective welfare, social stability, and developmental governance, this model enables rapid policy implementation when political will aligns with public interest. The large-scale introduction of screen-time limits and real-name verification systems demonstrates its administrative efficiency. To further enhance legitimacy and effectiveness, China should incorporate clear procedural protections, adopt transparent evaluation standards, and deepen cross-sector collaboration to ensure enforcement is both technically precise and broadly supported.
Ultimately, adaptive convergence, pursuing shared principles through contextually appropriate mechanisms, could significantly benefit the global regulatory landscape. For China, the challenge lies not in replicating foreign models wholesale, but in thoughtfully translating global norms into enforceable frameworks that align with domestic institutional capacities, political culture, and societal expectations. Such context-sensitive alignment can transform China’s centralized authority into a proactive, evidence-based model of digital health governance. Informed by these comparative insights, the following policy directions outline concrete pathways for strengthening governance and addressing the systemic drivers of addictive design.

4.3. Policy and Legislative Recommendations for Addressing Addictive Design Features

To build a more effective and forward-looking regulatory framework, targeted reforms that can intervene at key leverage points will be essential.

4.3.1. Establish a Risk-Tiered Oversight System Based on the Precautionary Principle

Effective governance of addictive design features requires a shift from reactive enforcement to anticipatory regulation. Given the rapid evolution of AI-driven engagement strategies, their cumulative effects on adolescent development, and the deep information asymmetry between platforms and regulators, China should adopt a precautionary framework for digital wellbeing governance.
There is a growing consensus that the precautionary principle should be introduced to address the legal challenge of decision-making under uncertainty. This principle, rooted in public health and environmental law, justifies regulatory action when plausible risks exist even without definitive causal evidence (; ). Originating in Germany’s Vorsorgeprinzip and institutionalized in EU approaches to genetically modified organisms and chemical safety (), it provides both ethical justification and strategic flexibility for governing design-based risks. Three factors make this principle essential for digital regulation. First, adaptive systems generate emergent and cumulative effects. By the time longitudinal studies yield definitive evidence of harm, adolescent cohorts may already have been exposed during critical developmental windows of heightened neuroplasticity. Second, the opacity of algorithmic systems systematically disadvantages regulators. Platforms possess massive behavioral datasets and testing capacities that allow rapid innovation in manipulation techniques, leaving regulators perpetually behind. Third, the developmental vulnerabilities of adolescents justify heightened protective standards. When commercial systems systematically target populations with limited decision-making capacity, waiting for conclusive harm evidence before intervening violates basic protective duties established in other consumer safety domains.
Critics claim the precautionary principle stifles innovation by demanding the impossible task of proving absolute safety, while neglecting “false positives” (Type I errors), such as rejecting beneficial technologies, whose hidden costs may far exceed the hypothetical harms the principle seeks to prevent (). However, these critiques misrepresent proportionate precaution as blanket prohibition. When designed transparently, some engagement mechanisms—such as gamified feedback or time-tracking—can enhance motivation or learning. Regulation should therefore distinguish adaptive from exploitative engagement designs, not eliminate them wholesale. A well-designed precautionary framework can establish risk-tiered safeguards calibrated to potential harm. The EU’s AI Act demonstrates this model by categorizing systems according to risk levels and imposing proportionate obligations that allow beneficial innovation while constraining harmful applications.
To balance innovation and protection, China should adopt a flexible, risk-tiered system based on a three-layer typology of addictive design features. The system would classify interface and algorithmic features by their empirically documented ability to exploit adolescent vulnerabilities and cause compulsive use, assigning proportionate obligations to each tier. High-risk features, those that directly manipulate cognitive processes and decision-making autonomy and have strong empirical links to addictive potential, would face the strictest controls. Examples include infinite scroll, which removes natural stopping cues and leverages action-completion bias, and autoplay, which eliminates deliberate choice points and defaults users into uninterrupted consumption streams. Such features should carry a reversed burden of proof, requiring platforms to provide independent research showing no disproportionate harm to adolescents before deployment. In services predominantly used by minors (e.g., educational apps, youth-oriented platforms), high-risk features should face prohibition. Medium-risk features would be subject to periodic external audits and mandatory transparency reporting. Low-risk addictive features would remain under voluntary guidelines and user-driven controls.
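To make the tiering operational in administrative practice, the following sketch shows how a regulator’s screening tool might encode the tier-to-obligation mapping. It is a minimal illustration only: the feature names, tier assignments, and obligation labels are hypothetical placeholders drawn from the examples discussed above, not a validated taxonomy or an existing regulatory instrument.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # direct manipulation of cognition and decision-making autonomy
    MEDIUM = "medium"  # indirect or context-dependent addictive potential
    LOW = "low"        # benign or user-controllable engagement features

# Hypothetical mapping of tiers to proportionate obligations, following the scheme in the text.
OBLIGATIONS = {
    RiskTier.HIGH: [
        "reversed burden of proof before deployment",
        "prohibition in services predominantly used by minors",
    ],
    RiskTier.MEDIUM: [
        "periodic external audit",
        "mandatory transparency reporting",
    ],
    RiskTier.LOW: [
        "voluntary guidelines",
        "user-driven controls",
    ],
}

# Illustrative (not authoritative) tier assignments for features discussed in this section.
EXAMPLE_FEATURES = {
    "infinite_scroll": RiskTier.HIGH,
    "autoplay": RiskTier.HIGH,
    "push_notifications": RiskTier.MEDIUM,
    "time_tracking_dashboard": RiskTier.LOW,
}

def obligations_for(feature: str) -> list[str]:
    """Return the obligations attached to a feature's assigned risk tier."""
    tier = EXAMPLE_FEATURES.get(feature, RiskTier.LOW)
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for name in EXAMPLE_FEATURES:
        print(name, "->", obligations_for(name))
```

Keeping the mapping in a single declarative table would make it straightforward for an advisory panel to reassign a feature’s tier as new evidence accumulates, without rewriting the surrounding enforcement logic.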
This proportionate approach reframes precaution from an innovation constraint into a tool for responsible foresight. By matching risk tiers with adaptive oversight, regulators can avoid overreach while staying alert to emerging harms. Independent advisory panels, consisting of developmental psychologists, addiction researchers, human–computer interaction specialists, and data scientists, should conduct continuous evidence-based reviews of design features as they evolve. Regular reassessments will keep regulatory standards current, responsive and scientifically grounded. Embedding the precautionary principle in digital-wellbeing governance thus makes regulation a proactive design mandate. It shifts the evidentiary burden onto actors who profit from adolescent attention, mitigates structural asymmetries of knowledge and power, and institutionalizes prevention.

4.3.2. Define “Addictive Design Features” in Statutory Language

The absence of clear legal definitions poses a significant barrier to effective regulation. Terms like “inducing addiction,” as used in documents such as the PAARIIS, lack sufficient specificity to guide enforcement or judicial interpretation meaningfully. This vagueness impedes accountability and limits the ability to target particularly harmful practices that exploit adolescent vulnerabilities. To address this critical gap, China should adopt a legally enforceable definition of addictive design features that distinguishes them from neutral persuasive or deceptive techniques.
Building on the typology introduced in this study and aligned with prior scholarship (; ), addictive design features can be defined as platform design features or interaction mechanisms that intentionally exploit cognitive or psychological vulnerabilities to induce compulsive user engagement, particularly among vulnerable populations, in ways that harm mental health or wellbeing. This definition centers on three regulatory criteria: (1) intentionality, referring to design elements specifically calibrated to maximize engagement; (2) compulsiveness, referring to diminished user autonomy and impaired disengagement; and (3) disproportionate harm, especially among adolescents or other vulnerable populations. These criteria enable regulators to distinguish addictive design features from other manipulative or deceptive designs, thereby ensuring targeted enforcement.
This definitional approach seeks to balance protection with developmental autonomy. Critics may contend that stringent standards impose excessive paternalism. However, autonomy—understood as the capacity to govern one’s life based on reasons and intentions rather than stimulus-response patterns (; )—is precisely what addictive design features undermine. The three criteria target design elements that subvert rather than merely influence decision-making capacity. By focusing on intentionality, compulsiveness, and disproportionate harm, this definition distinguishes exploitative patterns that erode developing agency from persuasive features that engage users within the bounds of autonomous choice. The regulatory aim is not to eliminate adolescent access to platforms but to ensure that such engagement occurs free from manipulative designs that exploit developmental vulnerabilities, thereby safeguarding the conditions necessary for genuine autonomy to develop.
The three-tiered typology makes the definition operational by classifying features according to strategic intent, psychological mechanisms, and specific design patterns, while identifying which AIDF components generate compulsive engagement. This framework allows for differentiated regulatory obligations, whereby medium-risk features would require mandatory disclosure, while high-risk patterns targeting minors would face prohibition. For instance, TikTok Lite’s rewards program illustrates how these criteria function in practice: (1) Intentionality is apparent in the system’s deliberate monetization of watch time through algorithmically optimized variable reward schedules (Controller) designed to maximize user engagement. (2) Compulsiveness emerges through the AIDF reinforcement loop in which monetary incentives trigger dopaminergic responses (Actuator), and visible point accumulation, daily missions, and countdown timers (Interface) generate urgency while diminishing natural stopping cues. Meanwhile, the Sensor continuously monitors viewing time and feeds this data back to the Controller to refine reward allocation, thereby weakening users’ ability to disengage voluntarily. (3) Disproportionate harm is particularly notable in adolescents, who are more susceptible to these variable reward mechanisms. When a design feature meets all three criteria, as this reward mechanism does, it calls for strong regulatory intervention or prohibition. Conversely, time-management tools that genuinely foster self-regulation and do not rely on intentional exploitation or compulsive designs should be exempt from stringent requirements.
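Read as a decision rule, the three criteria can be checked feature by feature. The sketch below is purely illustrative, with hypothetical field names loosely mapped to the AIDF components named above; it is not a proposed legal test, only a way of showing that the criteria are cumulative.

```python
from dataclasses import dataclass

@dataclass
class FeatureAssessment:
    """Hypothetical evidence record for one design feature.

    Fields loosely correspond to the AIDF components discussed in the text:
    Controller (optimization goal), Actuator/Interface (reward and urgency cues),
    Sensor (behavioral monitoring feeding back into reward allocation).
    """
    name: str
    optimized_for_watch_time: bool     # intentionality (Controller)
    variable_reward_schedule: bool     # compulsiveness (Actuator)
    removes_stopping_cues: bool        # compulsiveness (Interface)
    monitors_usage_for_rewards: bool   # compulsiveness (Sensor feedback)
    elevated_harm_for_minors: bool     # disproportionate harm

def meets_addictive_design_definition(a: FeatureAssessment) -> bool:
    """Apply the three cumulative criteria proposed in the text."""
    intentionality = a.optimized_for_watch_time
    compulsiveness = a.variable_reward_schedule and (
        a.removes_stopping_cues or a.monitors_usage_for_rewards
    )
    disproportionate_harm = a.elevated_harm_for_minors
    return intentionality and compulsiveness and disproportionate_harm

# Stylized reading of the rewards-program example from the text.
rewards_program = FeatureAssessment(
    name="short-video rewards program",
    optimized_for_watch_time=True,
    variable_reward_schedule=True,
    removes_stopping_cues=True,
    monitors_usage_for_rewards=True,
    elevated_harm_for_minors=True,
)

print(meets_addictive_design_definition(rewards_program))  # True -> strong intervention warranted
```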
Effective implementation requires balancing clarity with adaptability. Legal definitions must remain responsive to emerging design strategies while maintaining stable enforcement thresholds. Regulatory standards should target the underlying design architectures perpetuating compulsive engagement rather than merely observable behavioral outcomes, aligning legal frameworks with the AIDF’s systemic perspective. Continuous refinement through regulatory guidance and judicial interpretation will ensure definitions remain current as technologies evolve.

4.3.3. Mandate Anti-Addiction Design and Addictive Design Labeling

To protect adolescents from the harms of addictive design features, regulation should intervene both at the design stage and through user-facing transparency tools.
  • Anti-Addiction by Design
One essential approach is “anti-addiction by design”, inspired by the principle of “privacy by design” (). This strategy shifts the duty of care to platforms. Instead of relying on users to exercise self-control, platforms must prevent harm through responsible design. Interfaces should discourage compulsive interaction, limit repetitive reward loops, avoid deceptive or coercive prompts, and minimize data-driven personalization that amplifies dependence.
For adolescent users, platforms should apply developmental and accessibility principles to offer clear, intuitive, and non-intrusive interfaces aligned with their cognitive capacity. Effective practices include mandating simplified navigation structures and strategically introducing friction elements that require conscious decisions before extended use. Platforms can further mitigate time distortion by refining autoplay and recommendation algorithms and integrating visible time-use reminders (). These design-stage interventions create built-in “negative feedback loops” that counteract the reinforcing cycles identified in the AIDF. By altering how user input I(t) is processed and how platform output O(t + 1) is generated, such measures effectively interrupt or dampen reinforcing cycles of compulsive engagement. This approach aligns with “nudge theory” (), which emphasizes structuring choice environments to support better decision-making while preserving user autonomy.
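The damping logic can be written in the stylized feedback notation used by the AIDF. The formulation below is a deliberately simplified linear sketch, assuming a single scalar engagement gain g and a friction factor d; it is intended only to illustrate how design-stage friction converts a reinforcing loop into a converging one, not to model real platform dynamics.

```latex
% Stylized AIDF loop: next-cycle platform output scales current user input
\begin{aligned}
O(t+1) &= g\, I(t), & g > 1 \;&\Rightarrow\; \text{reinforcing loop, escalating engagement}\\
O(t+1) &= g\,(1-d)\, I(t), & 0 < d < 1,\; g(1-d) < 1 \;&\Rightarrow\; \text{friction damps the loop toward disengagement}
\end{aligned}
```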
  • Enforce Addictive Design Labels
A complementary measure is the introduction of standardized “addictive design labels.” These labels would visibly mark digital features or apps that meet defined thresholds of addictive potential, based on factors such as feedback intensity, personalization depth, and reward-based engagement cycles. Labels should appear at critical decision points, such as during application download, onboarding, or after extended use, to inform adolescents and caregivers before prolonged exposure.
A graded risk system, aligned with the three-tiered typology, could classify features as low, medium, or high risk. While specific thresholds will require further empirical validation, this system provides a scalable foundation for differentiated regulatory measures. In addition to informing users, these labels can create reputational pressure on platforms, similar to public health warnings in the food and tobacco sectors. Although empirical studies on addiction-specific labels are limited, evidence from similar contexts suggests they can effectively influence behavior. Nutritional labeling has shown that clear, standardized indicators can guide consumer choices, especially under cognitive constraints (). Similarly, privacy labeling in digital services has resulted in increased user awareness and more cautious behavior in response to accessible disclosures (). Well-designed addictive design labels can thus nudge adolescents toward more self-regulated digital habits.
Together, anti-addiction design and labeling create a two-tiered regulatory model. Structural design duties prevent harm at the system level, while transparent labeling supports informed engagement. Although further research is needed to refine risk criteria and assess long-term effects, these tools offer a precautionary yet innovation-friendly approach.

4.3.4. Implement Economic and Competition Tools to Realign Platform Incentives

Regulating addictive design requires addressing the business incentives that drive over-engagement. Platform revenue models built on attention monetization inherently conflict with adolescent wellbeing. To restore balance, regulators should combine economic disincentives and competition law to align private incentives with public health goals.
  • Adopt Pigouvian Taxation Policies
One option is the introduction of a Pigouvian tax on addictive design features. Grounded in welfare economics, this approach seeks to internalize negative externalities by imposing costs on socially harmful activities (). In the context of digital platforms, a tax could target specific design elements that research associates with compulsive engagement, particularly among adolescent users. By raising the marginal cost of deploying harmful features, the policy encourages platforms to adopt safer design alternatives ().
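In standard welfare-economics terms, this logic can be stated as a first-order condition. The expressions below are a textbook-style sketch, assuming a single platform choosing the deployment intensity x of an addictive feature, with private profit π(x) and external harm E(x) to adolescent wellbeing; they are not calibrated to any empirical estimate of digital-addiction externalities.

```latex
% x: deployment intensity of an addictive feature; \pi(x): platform profit; E(x): external harm, E'(x) > 0
\begin{aligned}
\text{Private optimum (untaxed):}\quad & \pi'(x_{\mathrm{priv}}) = 0\\
\text{Social optimum:}\quad & \pi'(x^{*}) = E'(x^{*})\\
\text{Pigouvian tax } t = E'(x^{*}):\quad & \max_{x}\; \pi(x) - t\,x \;\Longrightarrow\; \pi'(x) = t = E'(x^{*})
\end{aligned}
```

Setting the per-unit tax equal to the marginal external harm at the social optimum aligns the platform’s private incentive with the socially optimal intensity, which is the sense in which the tax “internalizes” the externality.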
Beyond deterrence, a Pigouvian tax carries strong symbolic and normative value. Like environmental levies that signal commitment to sustainability, a tax on addictive design features would affirm collective rejection of exploitative digital practices. Tax revenue could fund adolescent mental health programs, digital literacy education, and the development of independent oversight. Although empirical modeling of digital addiction externalities remains limited, experience from tobacco, alcohol, and sugar taxation shows that targeted fiscal measures can change both corporate behavior and user outcomes. Pilot programs with narrowly defined scope could therefore serve as a feasible first step in an experimental regulatory framework ().
  • Innovate Antitrust and Competition Law
Competition law offers another path to reshape platform incentives. Traditional antitrust enforcement focuses on price and output, but recent scholarship proposes expanding the consumer welfare standard to include non-price harms such as psychological impacts, attention drain, and autonomy erosion (). When dominant platforms deploy addictive design features that distort user decision-making or block market entry for ethical alternatives, their conduct may qualify as exclusionary practices under antitrust principles.
Reframing the attention economy as a relevant antitrust market enables regulators to analyze how engagement-maximizing designs influence competition and user choice. If these features systematically erode mental wellbeing, the resulting harms constitute measurable injuries justifying regulatory scrutiny. Potential remedies include requiring age-appropriate default interfaces that minimize addictive engagement for minors. Following the UK Age-Appropriate Design Code, China should adopt an age-appropriate default interface, making “Minors’ Mode” the standard setting for child users. This should be strengthened by disabling autoplay and reducing personalization intensity to ensure consistent application across platforms. Further, imposing platform interoperability requirements and conducting mandatory design audits for dominant firms would help reduce user lock-in and increase accountability. These interventions serve as system-level negative feedback mechanisms within the AIDF, countering the self-reinforcing dynamics of compulsive engagement.
Pigouvian taxation and antitrust enforcement provide complementary strategies for reshaping platform behavior through fiscal deterrence and legal accountability. Economic tools adjust cost structures while funding social mitigation measures, while competition law establishes acceptable design boundaries and encourages ethical innovation.

4.3.5. Construct Institutional Safeguards and Oversight Infrastructure

Industry advocates often argue that self-regulation offers greater agility than state intervention given rapid technological innovation and the risk of bureaucratic overreach. We acknowledge this concern and agree that industry participation is important for agility and technical cooperation. However, empirical evidence suggests that voluntary codes alone rarely constrain profit-driven design choices absent external accountability (). A credible governance framework must therefore embed co-regulatory mechanisms in which industry initiatives operate under public supervision and are subject to independent review. Critics frequently question regulators’ capacity to govern complex algorithmic systems because of persistent technical knowledge gaps. We do not underestimate this challenge. Accordingly, the model proposed here seeks to operationalize technical expertise within the governance architecture rather than to assume its prior existence. Specifically, the following institutional safeguards are designed to close the knowledge asymmetry while preserving regulatory agility and avoiding undue innovation costs.
  • Create Independent Audit and Oversight Infrastructure
In the AIDF model, the Controller serves as the algorithmic core that converts user input into system responses, shaping engagement feedback loops. However, current regulatory frameworks lack effective measures to assess whether these processes exploit adolescent vulnerabilities or promote compulsive use. Inspired by the EU’s DSA (Article 37), China should create independent algorithmic auditing bodies authorized to access and evaluate platform systems for minors. These bodies should include interdisciplinary experts. Platforms with over one million users must conduct annual algorithmic impact assessments that evaluate personalization targeting minors, engagement-maximizing mechanisms, session-prolonging design elements, and the effectiveness of youth protection measures. Audit reports should provide confidential technical findings for regulators while including public summaries that disclose key results without revealing trade secrets, ensuring transparency and protecting intellectual property. We recognize that the enforceability of such frameworks may face challenges, particularly regarding the independence of auditors and potential conflicts of interest. Clear operational guidelines and accountability protocols are therefore essential to preserve auditor credibility and ensure that oversight remains both impartial and effective.
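To indicate what such an annual assessment might contain, the sketch below outlines one possible report structure mirroring the four evaluation areas listed above. The field names, platform identifier, and example values are illustrative assumptions, not requirements drawn from any existing regulation or audit standard.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """Hypothetical schema for an annual algorithmic impact assessment."""
    platform: str
    reporting_year: int
    # 1. Personalization targeting minors
    minor_personalization_described: bool = False
    # 2. Engagement-maximizing mechanisms (e.g., variable reward schedules)
    engagement_mechanisms: list[str] = field(default_factory=list)
    # 3. Session-prolonging design elements (e.g., autoplay, infinite scroll)
    session_prolonging_features: list[str] = field(default_factory=list)
    # 4. Effectiveness of youth protection measures
    protection_measures_evaluated: list[str] = field(default_factory=list)
    confidential_technical_annex: str = ""  # detailed findings for regulators only
    public_summary: str = ""                # published without trade secrets

report = ImpactAssessment(
    platform="example-video-service",
    reporting_year=2026,
    minor_personalization_described=True,
    engagement_mechanisms=["variable reward schedule"],
    session_prolonging_features=["autoplay", "infinite scroll"],
    protection_measures_evaluated=["minors' mode defaults", "time-use reminders"],
    public_summary="Key findings disclosed without proprietary detail.",
)
print(report.platform, report.reporting_year)
```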
  • Foster Multi-Stakeholder Governance Architecture
Effective governance of addictive design features requires collaborative engagement beyond centralized administrative measures. Given the interdisciplinary nature of these harms and the significant knowledge asymmetry between regulators and platforms, China should establish a multi-stakeholder governance model () that includes government agencies, platform operators, industry associations, academic institutions, and civil society organizations. Each stakeholder should have distinct but complementary roles.
Public authorities set regulatory standards and coordinate their implementation, offering incentives such as policy benefits to platforms that exceed minimum requirements while imposing reputational penalties for persistent violations. Industry associations can translate legal mandates into sector-specific codes of conduct that promote ethical design norms, explicit wellbeing protections, and principles aimed at minimizing addictive potential. Digital platforms need to take primary responsibility for managing design-related risks through robust internal oversight, which includes independent ethics committees and adolescent-centered design audits. Under China’s Personal Information Protection Law, large platforms are already required to publish reports on personal information protection and social responsibility (Article 58). These disclosures should expand to include algorithmic transparency and behavioral impact assessments of how design components affect adolescent engagement and well-being.
Independent third parties, such as academic institutions, research centers, and nonprofit organizations, play a critical role in monitoring emerging risks and informing evidence-based policy, bridging the technical knowledge gap. Their specialized expertise in areas like algorithm auditing, human–computer interaction, and developmental psychology enables them to conduct rigorous assessments that regulatory agencies may lack the capacity to perform directly. To operationalize this function, authorities should empower accredited third-party auditors with legal mandates to access platform code, data, and design documentation relating to addictive design features under confidentiality protections, ensuring a robust and impartial scrutiny process. Civil society actors, including consumer advocates, youth protection groups, and parent associations, contribute by raising public awareness, lodging complaints, and initiating collective redress, thereby fostering social accountability that encourages platforms to move beyond minimal compliance.

4.3.6. Engage in Cross-Border Coordination to Prevent Regulatory Arbitrage

Strict domestic regulations may encourage regulatory arbitrage, prompting adolescents to use offshore platforms with weaker protections. However, this concern is mitigated by converging international norms targeting addictive design practices. When major jurisdictions impose comparable requirements, platforms face higher compliance costs for maintaining jurisdiction-specific frameworks that exploit regulatory gaps. This creates economic incentives for unified design standards that meet the most stringent requirements across key markets, reflecting the “Brussels Effect,” which suggests a “race to the top” (). Consequently, Chinese adolescents seeking alternative platforms would encounter increasingly similar protective measures, constraining arbitrage opportunities.
To reinforce this convergence, China should strengthen participation in multilateral and bilateral coordination frameworks. Collaboration with jurisdictions that have implemented similar frameworks could facilitate information sharing on emerging addictive design practices and joint enforcement against repeat offenders across markets. Such cooperation would create shared evidentiary standards and mutual recognition of audit results, improving enforcement efficiency and reducing duplicative oversight. Although smaller or encrypted platforms may evade direct regulation, they generally lack the scale and algorithmic sophistication of major social media operators such as TikTok, WeChat, or Instagram, which account for most youth exposure and harm. Regulatory inaction driven by fears of partial migration allows unrestricted manipulation to persist on mainstream platforms serving hundreds of millions of adolescents. Coordinated international action, even if incomplete, can significantly reduce systemic risks and establish a more consistent global baseline for adolescent digital well-being.

5. Conclusions

This study underscores the urgency of reforming China’s regulatory framework to address the growing risks posed by addictive design features, particularly for adolescents. Current regulations remain fragmented and lack both clear legal definitions and enforceable obligations for digital platforms. As AI-driven technologies increasingly optimize engagement through adaptive feedback systems, targeted and system-oriented legal interventions are essential to safeguard young users’ well-being.
This article proposes policy directions aligned with the AIDF model’s system components and feedback loops, shifting responsibility from individual users to institutional and corporate actors. These include embedding the precautionary principle, developing legally enforceable definitions, innovating regulatory tools such as transparency mandates and friction mechanisms, realigning economic incentives through fiscal or antitrust measures, and strengthening multi-stakeholder coordination. It further calls for independent algorithmic auditing mechanisms, periodic youth impact assessments, and cross-border regulatory cooperation to prevent regulatory arbitrage and ensure consistent global protection standards.
While the article provides a normative foundation for future inquiry, its findings are limited by a reliance on secondary data and a China-focused perspective. Future research should incorporate longitudinal studies on the psychological and behavioral effects of addictive design features, and comparative analyses of international regulatory models to refine evidence-based, youth-centered governance frameworks.

Author Contributions

Conceptualization, Y.Y. and F.Y.; methodology, Y.Y. and F.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, F.Y.; project administration, Y.Y.; funding acquisition, Y.Y. and F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Zhejiang Provincial Philosophy and Social Science Planning Special Research Project on “Research and Interpretation of the Spirit of the Third Plenary Session of the 20th CPC Central Committee and the Fifth Plenary Session of the 15th Zhejiang Provincial Party Committee,” [Project Title: Systemic Governance of Digital Addiction Among Minors in the AI Era], and by the Humanities and Social Sciences Youth Foundation, Ministry of Education of the People’s Republic of China [Grant Number: 23YJCZH277; Project Title: Research on Legal Regulation of Digital Cognitive Manipulation from the Perspective of Behavioral Market Failure].

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADHD: Attention-deficit/hyperactivity disorder
AIDF: Adaptive Interaction Design Framework
CAC: Cyberspace Administration of China
DSA: Digital Services Act
DSM: Diagnostic and Statistical Manual of Mental Disorders
EU: European Union
FOMO: Fear of Missing Out
GDPR: General Data Protection Regulation
ICD: International Classification of Diseases
Ofcom: Office of Communication
PAARIIS: Provisions on the Administration of Algorithm-powered Recommendations for Internet Information Services
PIU: Problematic Internet Use
RPMC: Regulation on the Protection of Minors in Cyberspace
SDG: Sustainable Development Goal
UNICEF: United Nations International Children’s Emergency Fund
VLOPs: Very Large Online Platforms
WHO: World Health Organization

References

  1. Agencia Española de Protección de Datos (Spanish Data Protection Agency). (2024). Addictive patterns in the processing of personal data—Implications for data protection. Available online: https://www.aepd.es/en/guides/addictive-patterns-and-the-right-to-integrity.pdf (accessed on 18 August 2025).
  2. Agyapong-Opoku, N., Agyapong-Opoku, F., & Greenshaw, A. J. (2025). Effects of social media use on youth and adolescent mental health: A scoping review of reviews. Behavioral Sciences, 15(5), 574. [Google Scholar] [CrossRef]
  3. American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Association. [Google Scholar]
  4. Aragon-Correa, J. A., Marcus, A. A., & Vogel, D. (2020). The effects of mandatory and voluntary regulatory pressures on firms’ environmental strategies: A review and recommendations for future research. The Academy of Management Annals, 14(1), 339–365. [Google Scholar] [CrossRef]
  5. Aziz, M., Chemnad, K., Al-Harahsheh, S., Abdelmoneium, A. O., Bagdady, A., Hassan, D. A., & Ali, R. (2024). The influence of adolescents essential and non-essential use of technology and internet addiction on their physical and mental fatigues. Scientific Reports, 14, 1745. [Google Scholar] [CrossRef]
  6. Beltrán, M. (2025). Defining, classifying and identifying addictive patterns in digital products. IEEE Transactions on Technology and Society, 6(3), 314–323. [Google Scholar] [CrossRef]
  7. Bernstein v. US Dept. of State, 945 F. Supp. 1279. (1996, N.D. Cal.). (1996). Available online: https://law.justia.com/cases/federal/district-courts/FSupp/945/1279/1457799/ (accessed on 17 November 2025).
  8. Bhargava, V. R., & Velasquez, M. (2021). Ethics of the attention economy: The problem of social media addiction. Business Ethics Quarterly, 31(3), 321–359. [Google Scholar] [CrossRef]
  9. Bilge, M., Uçan, G., & Baydur, H. (2022). Investigating the association between adolescent internet addiction and parental attitudes. International Journal of Public Health, 67, 1605065. [Google Scholar] [CrossRef]
  10. Brandtzaeg, P., Følstad, A., & Skjuve, M. (2025). Emerging AI individualism: How young people integrate social AI into everyday life. Communication and Change, 1, 11. [Google Scholar] [CrossRef]
  11. Burnell, K., Andrade, F. C., & Hoyle, R. H. (2023). Longitudinal and daily associations between adolescent self-control and digital technology use. Developmental Psychology, 59(4), 720–732. [Google Scholar] [CrossRef]
  12. Burnett, H. S. (2009). Understanding the precautionary principle and its threat to human welfare. Social Philosophy and Policy, 26(2), 378–410. [Google Scholar] [CrossRef]
  13. California Legislature. (2022). The California age-appropriate design code act. Available online: https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=202120220AB2273 (accessed on 18 August 2025).
  14. California Legislature. (2023). Senate bill 287. Available online: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB287 (accessed on 18 August 2025).
  15. Cemiloglu, D., Naiseh, M., Catania, M., Oinas-Kukkonen, H., & Ali, R. (2021). The fine line between persuasion and digital addiction. In R. Ali, B. Lugrin, & F. Charles (Eds.), Persuasive technology. PERSUASIVE 2021. Lecture notes in computer science (Vol. 12684, pp. 378–394). Springer. [Google Scholar] [CrossRef]
  16. Chamorro, L. S., Lallemand, C., & Gray, C. M. (2024, July 1–5). “My mother told me these things are always fake”—Understanding teenagers’ experiences with manipulative designs. 2024 ACM Designing Interactive Systems Conference (DIS’24), Copenhagen, Denmark. [Google Scholar] [CrossRef]
  17. Chen, H., Dong, G., & Li, K. (2023). Overview on brain function enhancement of Internet addicts through exercise intervention: Based on reward-execution-decision cycle. Frontiers in Psychiatry, 14, 1094583. [Google Scholar] [CrossRef]
  18. Chen, X. W., Hedman, A., Distler, V., & Koenig, V. (2023). Do persuasive designs make smartphones more addictive?—A mixed-methods study on Chinese university students. Computers in Human Behavior Reports, 10, 100299. [Google Scholar] [CrossRef]
  19. China Internet Network Information Center [CNNIC]. (2023). 5th National survey report on internet usage by minors. Available online: https://www.cnnic.net.cn/n4/2023/1225/c116-10908.html (accessed on 18 August 2025).
  20. China’s National Press and Publication Administration. (2019). Notice on preventing minors from becoming addicted to online games. Available online: https://www.nppa.gov.cn/xxfb/zcfg/gfxwj/201911/t20191119_4503.html (accessed on 17 November 2025).
  21. Cho, H., Choi, D., Kim, D., Kang, W. J., Choe, E. K., & Lee, S.-J. (2021). Reflect, not regret: Understanding regretful smartphone use with APP feature-level analysis. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 456. [Google Scholar] [CrossRef]
  22. Coiduras-Sanagustín, A., Manchado-Pérez, E., & García-Hernández, C. (2024). Understanding perspectives for product design on personal data privacy in internet of things (IoT): A systematic literature review (SLR). Heliyon, 10(9), e30357. [Google Scholar] [CrossRef]
  23. Crum, B. (2025). Brussels effect or experimentalism? The EU AI Act and global standard-setting. Internet Policy Review, 14(3). [Google Scholar] [CrossRef]
  24. Cyberspace Administration of China, Ministry of Industry and Information Technology of the People’s Republic of China, Ministry of Public Security of the People’s Republic of China & State Administration for Market Regulation. (2021). Provisions on the management of algorithmic recommendations in internet information services. Available online: https://www.gov.cn/zhengce/zhengceku/2022-01/04/content_5666429.htm (accessed on 17 November 2025).
  25. Cyberspace Administration of China, National Development and Reform Commission of the People’s Republic of China, Ministry of Education of the People’s Republic of China, Ministry of Science and Technology of the People’s Republic of China, Ministry of Industry and Information Technology of the People’s Republic of China, Ministry of Public Security of the People’s Republic of China & National Radio and Television Administration. (2023). Interim measures for generative artificial intelligence services. Available online: https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed on 17 November 2025).
  26. Day, G. (2022). Antitrust, attention, and the mental health crisis. Minnesota Law Review, 10, 1901–1957. [Google Scholar]
  27. De Conca, S. (2023). The present looks nothing like the Jetsons: Deceptive design in virtual assistants and the protection of the rights of users. Computer Law & Security Review, 51, 105866. [Google Scholar] [CrossRef]
  28. Del-Real, C., De Busser, E., & van den Berg, B. (2025). A systematic literature review of security and privacy by design principles, norms, and strategies for digital technologies. International Review of Law, Computers & Technology, 39, 374–405. [Google Scholar] [CrossRef]
  29. Ding, K., Shen, Y., Liu, Q., & Li, H. (2024). The effects of digital addiction on brain function and structure of children and adolescents: A scoping review. Healthcare, 12(1), 15. [Google Scholar] [CrossRef]
  30. Edenhofer, O., Franks, M., & Kalkuhl, M. (2021). Pigou in the 21st century: A tribute on the occasion of the 100th anniversary of the publication of the economics of welfare. International Tax and Public Finance, 28, 1090–1121. [Google Scholar] [CrossRef]
  31. Esposito, F., & Maciel Cathoud Ferreira, T. (2024). Addictive design as an unfair commercial practice: The case of hyper-engaging dark patterns. European Journal of Risk Regulation, 15(4), 999–1016. [Google Scholar] [CrossRef]
  32. European Parliament. (2023). European parliament resolution of 12 December 2023 on addictive design of online services and consumer protection in the EU single market (2023/2043(INI)). Available online: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0459_EN.html (accessed on 18 August 2025).
  33. European Union. (2022). Regulation (EU) 2022/2065 of the European parliament and of the council of 19 October 2022 on a single market for digital services and amending directive 2000/31/EC (digital services Act) (Text with EEA relevance). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2065 (accessed on 18 August 2025).
  34. European Union. (2024). Regulation (EU) 2024/1689 of the European parliament and of the council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial intelligence act). Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 18 August 2025).
  35. Ferrari, G. J., Jr., da Silva, A. B., Meneghetti, A., Leite, C. R., Brust, C., Moreira, G. C., & Felden, É. P. G. (2024). Relationships between internet addiction, quality of life and sleep problems: A structural equation modeling analysis. Journal of Pediatrics, 100(3), 283–288. [Google Scholar] [CrossRef] [PubMed]
  36. Feunekes, G. I. J., Gortemaker, I. A., Willems, A. A., Lion, R., & van den Kommer, M. (2008). Front-of-pack nutrition labelling: Testing effectiveness of different nutrition labelling formats front-of-pack in four European countries. Appetite, 50(1), 57–70. [Google Scholar] [CrossRef]
  37. Fineberg, N. A., Demetrovics, Z., Stein, D. J., Ioannidis, K., Potenza, M. N., Grünblatt, E., Brand, M., Billieux, J., Carmi, L., King, D. L., Grant, J. E., Yücel, M., Dell’Osso, B., Rumpf, H. J., Hall, N., Hollander, E., Goudriaan, A., Menchon, J., Zohar, J., … Chamberlain, S. R. (2018). Manifesto for a european research network into problematic usage of the internet. European Neuropsychopharmacology, 28(11), 1232–1246. [Google Scholar] [CrossRef]
  38. Fogg, B. J. (1998, April 18–23). Persuasive computers: Perspectives and research directions. 1998 SIGCHI Conference on Human Factors in Computing Systems, Los Angeles, CA, USA. [Google Scholar] [CrossRef]
  39. Friedrich, O., Racine, E., Steinert, S., Pömsl, J., & Jox, R. J. (2018). An analysis of the impact of brain-computer interfaces on autonomy. Neuroethics, 14, 17–29. [Google Scholar] [CrossRef]
  40. Gao, Y. Y., Hu, Y., Wang, J. L., Liu, C., Im, H., Jin, W. P., Zhu, W. W., Ge, W., Zhao, G., Yao, Q., Wang, P. C., Zhang, M. M., Niu, X., He, Q. H., & Wang, Q. (2025). Neuroanatomical and functional substrates of the short video addiction and its association with brain transcriptomic and cellular architecture. NeuroImage, 307, 121029. [Google Scholar] [CrossRef]
  41. Geronimo, L. D., Braz, L., Fregnan, E., Palomba, F., & Bacchelli, A. (2020, April 25–30). UI dark patterns and where to find them: A study on mobile applications and user perception. 2020 CHI Conference on Human Factors in Computing Systems (CHI’20), Honolulu, HI, USA. [Google Scholar] [CrossRef]
  42. Gibson, J. J. (2015). The ecological approach to visual perception. Psychology Press. [Google Scholar]
  43. Giedd, J. N. (2020). Adolescent brain and the natural allure of digital media. Dialogues in Clinical Neuroscience, 22(2), 127–133. [Google Scholar] [CrossRef]
  44. Goldstein, B. D. (2001). The precautionary principle also applies to public health actions. American Journal of Public Health, 91, 1358–1361. [Google Scholar] [CrossRef]
  45. Gray, C. M., Santos, C., & Bielova, N. (2023, April 23–28). Towards a preliminary ontology of dark patterns knowledge. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. [Google Scholar] [CrossRef]
  46. Haghjoo, P., Siri, G., Soleimani, E., Farhangi, M. A., & Alesaeidi, S. (2022). Screen time increases overweight and obesity risk among adolescents: A systematic review and dose-response meta-analysis. BMC Primary Care, 23, 161. [Google Scholar] [CrossRef] [PubMed]
  47. Hartford, A., & Stein, D. J. (2022). Attentional harms and digital inequalities. JMIR Mental Health, 9(2), e30838. [Google Scholar] [CrossRef] [PubMed]
  48. Hartson, R. (2003). Cognitive, physical, sensory and functional affordances in interaction design. Behaviour & Information Technology, 22(5), 315–338. [Google Scholar] [CrossRef]
  49. Howard, S. J., Hayes, N., Mallawaarachchi, S., Johnson, D., Neilsen-Hewett, C., Mackenzie, J., Bentley, L. A., & White, S. L. J. (2025). A meta-analysis of self-regulation and digital recreation from birth to adolescence. Computers in Human Behavior, 163, 108472. [Google Scholar] [CrossRef]
  50. Huang, Z. Z., & Wang, Z. (2024). Research on the legal regulation of algorithms in online game addiction. Journal of Chongqing University of Posts and Telecommunications (Social Science Edition), 6, 55–64. [Google Scholar]
  51. Hutchinson, T., & Duncan, N. (2012). Defining and describing what we do: Doctrinal legal research. Deakin Law Review, 17(1), 83–119. [Google Scholar] [CrossRef]
  52. Ibrahim, L., Rocher, L., & Valdivia, A. (2024). Characterizing and modeling harms from interactions with design patterns in AI interfaces. arXiv, arXiv:2404.11370. [Google Scholar] [CrossRef]
  53. Isola, C., & Esposito, F. (2025). A systematic literature review on dark patterns for the legal community: Definitional clarity and a legal classification based on the Unfair Commercial Practices Directive. Computer Law & Security Review, 58, 106169. [Google Scholar] [CrossRef]
  54. Jeong, E. J., Kim, D. J., & Lee, D. M. (2016). Why do some people become addicted to digital games more easily? A study of digital game addiction from a psychosocial health perspective. International Journal of Human-Computer Interaction, 33(3), 199–214. [Google Scholar] [CrossRef]
  55. Jiang, Y., Yan, Z., & Yang, Z. (2025). Losing track of time on Tiktok? An experimental study of short video users’ time distortion. Behavioral Sciences, 15(7), 930. [Google Scholar] [CrossRef]
  56. Keles, B., McCrae, N., & Grealish, A. (2020). A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. International Journal of Adolescence and Youth, 25(1), 79–93. [Google Scholar] [CrossRef]
  57. Kelley, P. G., Cesca, L., Bresee, J., & Cranor, L. F. (2010, April 10–15). Standardizing privacy notices: An online study of the nutrition label approach. 2010 SIGCHI Conference on Human Factors in Computing Systems (CHI’10), Atlanta, GA, USA. [Google Scholar] [CrossRef]
  58. Kiviruusu, O. (2024). Excessive internet use among Finnish young people between 2017 and 2021 and the effect of COVID-19. Social Psychiatry and Psychiatric Epidemiology, 59, 2291–2301. [Google Scholar] [CrossRef]
  59. Kumar, V., Ashraf, A. R., & Nadeem, W. (2024). AI-powered marketing: What, where, and how? International Journal of Information Management, 77, 102783. [Google Scholar] [CrossRef]
  60. Kuss, D. J., van Rooij, A. J., Shorter, G. W., Griffiths, M. D., & van De Mheen, D. (2013). Internet addiction in adolescents: Prevalence and risk factors. Computers in Human Behavior, 29(5), 1987–1996. [Google Scholar] [CrossRef]
  61. Langvardt, K. (2019). Regulating habit-forming technology. Fordham Law Review, 88, 129–185. [Google Scholar] [CrossRef]
  62. Lee, U., Lee, J., Ko, M., Lee, C., Kim, Y., Yang, S., Yatani, K., Gweon, G., Chung, K.-M., & Song, J. (2014, April 26–May 1). Hooked on smartphones: An exploratory study on smartphone overuse among college students. SIGCHI Conference on Human Factors in Computing Systems (CHI‘14), Toronto, ON, Canada. [Google Scholar] [CrossRef]
  63. Leeman, R. F., & Potenza, M. N. (2013). A targeted review of the neurobiology and genetics of behavioural addictions: An emerging area of research. Canadian Journal of Psychiatry, 58(5), 260–273. [Google Scholar] [CrossRef] [PubMed]
  64. Li, S., Ren, P., Chiu, M. M., Wang, C., & Lei, H. (2021). The relationship between self-control and internet addiction among students: A meta-analysis. Frontiers in Psychology, 12, 735755. [Google Scholar] [CrossRef] [PubMed]
  65. Li, Y. (2015). Perspectives on the challenges and impacts of targeted internet governance in China. The Press, 21, 122–125. [Google Scholar]
  66. Lin, W., & Wu, Y. S. (2022). On the minor mode of network protection: Legislation improvement, idea optimization and difficulties overcoming. Jilin University Journal (Social Science Edition), 5, 5–19+235. [Google Scholar] [CrossRef]
  67. Lin, Y.-H., Chiang, C.-L., Lin, P.-H., Chang, L.-R., Ko, C.-H., Lee, Y.-H., & Lin, S.-H. (2016). Proposed diagnostic criteria for smartphone addiction. PLoS ONE, 11(11), e0163010. [Google Scholar] [CrossRef]
  68. Lukoff, K., Lyngs, U., Zade, H., Liao, J. V., Choi, J., Fan, K., Munson, S. A., & Hiniker, A. (2021, May 8–13). How the design of Youtube influences user sense of agency. 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan. [Google Scholar] [CrossRef]
  69. Martuzzi, M. (2007). The precautionary principle: In action for public health. Occupational and Environmental Medicine, 64, 569–570. [Google Scholar] [CrossRef]
  70. Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M., & Narayanan, A. (2019). Dark patterns at scale: Findings from a crawl of 11k shopping websites. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 81. [Google Scholar] [CrossRef]
  71. Mathur, A., Kshirsagar, M., & Mayer, J. (2021, May 8–13). What makes a dark pattern… dark? design attributes, normative considerations, and measurement methods. 2021 CHI Conference on Human Factors in Computing Systems (CHI’21), Yokohama, Japan. [Google Scholar] [CrossRef]
  72. Matias, J. N. (2023). Humans and algorithms work together—So study them together. Nature, 617(7961), 248–251. [Google Scholar] [CrossRef]
  73. Méndez, M. L., Padrón, I., Fumero, A., & Marrero, R. J. (2024). Effects of internet and smartphone addiction on cognitive control in adolescents and young adults: A systematic review of fMRI studies. Neuroscience & Biobehavioral Reviews, 159, 105572. [Google Scholar] [CrossRef]
  74. Monge Roffarello, A., Lukoff, K., & De Russis, L. (2023, April 23–28). Defining and identifying attention capture deceptive designs in digital interfaces. 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. [Google Scholar] [CrossRef]
  75. Nagata, J. M., Smith, N., Alsamman, S., Lee, C. M., Dooley, E. E., Kiss, O., Ganson, K. T., Wing, D., Baker, F. C., & Gabriel, K. P. (2023). Association of physical activity and screen time with body mass index among US adolescents. JAMA Network Open, 6(2), e2255466. [Google Scholar] [CrossRef] [PubMed]
  76. New York State Senate. (2023). Senate bill S7694A, the stop addictive feeds exploitation (SAFE) for kids act. Available online: https://www.nysenate.gov/legislation/bills/2023/S7694/amendment/A (accessed on 18 August 2025).
  77. Nise, N. S. (2015). Control systems engineering (7th ed.). Wiley Publisher. [Google Scholar]
  78. Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38–42. [Google Scholar] [CrossRef]
  79. OECD. (2022). Dark commercial patterns. OECD digital economy papers (No. 336). OECD Publishing. [Google Scholar] [CrossRef]
  80. Park, S., Kang, M., & Kim, E. (2014). Social relationship on problematic internet use (PIU) among adolescents in South Korea: A moderated mediation model of self-esteem and self-control. Computers in Human Behavior, 38, 349–357. [Google Scholar] [CrossRef]
  81. Piko, B. F., Krajczár, S. K., & Kiss, H. (2024). Social media addiction, personality factors and fear of negative evaluation in a sample of young adults. Youth, 4(1), 357–368. [Google Scholar] [CrossRef]
  82. Reed, P. (2022). Impact of social media use on executive function. Computers in Human Behavior, 141, 107598. [Google Scholar] [CrossRef]
  83. Rossi, A., Carli, R., Botes, M. W., Fernandez, A., Sergeeva, A., & Chamorro, L. S. (2024). Who is vulnerable to deceptive design patterns? a transdisciplinary perspective on the multi-dimensional nature of digital vulnerability. Computer Law & Security Review, 55, 106031. [Google Scholar] [CrossRef]
  84. Schepis, T. S., Adinoff, B., & Rao, U. (2008). Neurobiological processes in adolescent addictive disorders. American Journal of Addictions, 17(1), 6–23. [Google Scholar] [CrossRef]
  85. Schittenhelm, C., Kops, M., Moosburner, M., Fischer, S. M., & Wachs, S. (2025). Cybergrooming victimization among young people: A systematic review of prevalence rates, risk factors, and outcomes. Adolescent Research Review, 10, 169–200. [Google Scholar] [CrossRef]
  86. Secretariat of the Office of the Central Cyberspace Affairs Commission, General Office of the Ministry of Industry and Information Technology, General Office of the Ministry of Public Security & General Office of the State Administration for Market Regulation. (2024). Notice on launching the special campaign for the “Qinglang: Governance of typical algorithmic issues on online platforms”. Available online: https://www.cac.gov.cn/2024-11/24/c_1734143936205514.htm (accessed on 18 August 2025).
  87. Servidio, R., Soraci, P., Griffiths, M. D., Boca, S., & Demetrovics, Z. (2024). Fear of missing out and problematic social media use: A serial mediation model of social comparison and self-esteem. Addictive Behaviors Reports, 19, 100536. [Google Scholar] [CrossRef]
  88. Shi, W., Zhao, Y., Zhou, J., & Shi, J. (2025). Differential neural reward processes in internet addiction: A systematic review of brain imaging research. Addictive Behaviors, 167, 108346. [Google Scholar] [CrossRef]
  89. Silvani, M. I., Werder, R., & Perret, C. (2022). The influence of blue light on sleep, performance and wellbeing in young adults: A systematic review. Frontiers in Physiology, 13, 943108. [Google Scholar] [CrossRef]
  90. Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, No. 4:22-md-03047-YGR, 2023 WL 2414002 (N.D. Cal. Mar. 8, 2023). Available online: https://caselaw.findlaw.com/court/us-dis-crt-n-d-cal/115457456.html (accessed on 17 November 2025).
  91. Soe, T. H., Nordberg, O. E., Guribye, F., & Slavkovik, M. (2020, October 25–29). Circumvention by design—Dark patterns in cookie consent for online news outlets. 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI’20), Tallinn, Estonia. [Google Scholar] [CrossRef]
  92. Southern Metropolis Daily. (2019). Field test of “Youth Mode” on 20 video platforms. Available online: https://ishare.ifeng.com/c/s/7naewXkBw3h (accessed on 4 November 2025).
  93. Standing Committee of the National People’s Congress of China. (2020). Law on the protection of minors. Available online: http://www.moe.gov.cn/jyb_sjzl/sjzl_zcfg/zcfg_qtxgfl/202110/t20211025_574798.html (accessed on 17 November 2025).
  94. State Council of China. (2023). Regulations on the protection of minors in cyberspace. Available online: https://www.gov.cn/zhengce/content/202310/content_6911288.htm (accessed on 17 November 2025).
  95. Su, C., Zhou, H., Gong, L., Teng, B., Geng, F., & Hu, Y. (2021). Viewing personalized video clips recommended by TikTok activates default mode network and ventral tegmental area. NeuroImage, 237, 118136. [Google Scholar] [CrossRef]
  96. Sun, Y., Shan, Y., Xie, J., Chen, K., & Hu, J. (2024). The relationship between social media information sharing characteristics and problem behaviors among Chinese college students under recommendation algorithms. Psychology Research and Behavior Management, 17, 2783–2794. [Google Scholar] [CrossRef]
  97. Thaler, R. H. (2018). Nudge, not sludge. Science, 361(6401), 431. [Google Scholar] [CrossRef]
  98. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press. [Google Scholar]
  99. Tokiya, M., Itani, O., Otsuka, Y., & Kaneita, Y. (2020). Relationship between internet addiction and sleep disturbance in high school students: A cross-sectional study. BMC Pediatrics, 20, 379. [Google Scholar] [CrossRef]
  100. Topan, A., Anol, S., Taşdelen, Y., & Kurt, A. (2025). Exploring the relationship between cyberbullying and technology addiction in adolescents. Public Health Nursing, 42(1), 33–43. [Google Scholar] [CrossRef] [PubMed]
  101. Tran, J. A., Yang, K. S., Davis, K., & Hiniker, A. (2019, May 4–9). Modeling the engagement-disengagement cycle of compulsive phone use. 2019 CHI Conference on Human Factors in Computing Systems (CHI’19), Glasgow, UK. [Google Scholar] [CrossRef]
  102. Twenge, J. M., Joiner, T. E., Rogers, M. L., & Martin, G. N. (2017). Increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents after 2010 and links to increased new media screen time. Clinical Psychological Science, 6(1), 3–17. [Google Scholar] [CrossRef]
  103. UK Information Commissioner’s Office. (2020). Age appropriate design: A code of practice for online services. Available online: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/ (accessed on 18 August 2025).
  104. UK Parliament. (2023). Online safety act 2023. Available online: https://www.legislation.gov.uk/ukpga/2023/50 (accessed on 18 August 2025).
  105. UNICEF. (2025). Childhood in a digital world: Screen time, digital skills and mental health. Available online: https://www.unicef.org/innocenti/innocenti/media/11296/file/UNICEF-Innocenti-Childhood-in-a-Digital%20World-report-2025.pdf (accessed on 18 August 2025).
  106. U.S. Senate. (2019). Social media addiction reduction technology act (SMART Act), S.2314—116th Congress (2019–2020). Available online: https://www.congress.gov/bill/116th-congress/senate-bill/2314 (accessed on 18 August 2025).
  107. Vugts, A., van den Hoven, M., de Vet, E., & Verweij, M. F. (2020). How autonomy is understood in discussions on the ethics of nudging. Behavioural Public Policy, 4, 108–123. [Google Scholar] [CrossRef]
  108. Wang, P., Yuan, X., Gao, T., Wang, X., Xing, Q., Cheng, X., & Ming, Y. (2025). Problematic internet use in early adolescence: Gender and depression differences in a latent growth curve model. Humanities and Social Sciences Communications, 12, 368. [Google Scholar] [CrossRef]
  109. Wang, Y., & Zhang, W. Y. (2025). The evaluation of platform addictive algorithms governance in the EU and the United States and the exploration of China’s “multi-stakeholder governance” mode. International Economic and Trade Research, 6, 108–121. [Google Scholar] [CrossRef]
  110. Weed, D. L. (2004). Precaution, prevention, and public health ethics. Journal of Medicine and Philosophy, 29(3), 313–332. [Google Scholar] [CrossRef] [PubMed]
  111. WHO. (2024). Teens, screens and mental health: New WHO report indicates need for healthier online habits among adolescents. Available online: https://www.who.int/europe/news/item/25-09-2024-teens--screens-and-mental-health (accessed on 18 August 2025).
  112. WHO. (2025). International classification of diseases for mortality and morbidity statistics (11th revision). Available online: https://icd.who.int/browse/2025-01/mms/en (accessed on 18 August 2025).
  113. Xiang, C., & Lu, X. (2024). Digital addiction: Formation mechanisms, risk assessment, and regulatory approaches. E-Government, 12, 108–120. [Google Scholar] [CrossRef]
  114. Yao, Y. (2025). Regulating addictive algorithms and designs: Protecting older adults from digital exploitation beyond a youth-centric approach. Frontiers in Psychology, 16, 1579604. [Google Scholar] [CrossRef]
  115. Ye, X. (2025). Dark patterns and addictive designs. Weizenbaum Journal of the Digital Society, 5, 1–24. [Google Scholar] [CrossRef]
  116. Zac, A., Huang, Y. C., von Moltke, A., Decker, C., & Ezrachi, A. (2025). Dark patterns and consumer vulnerability. Behavioural Public Policy, 1–50. [Google Scholar] [CrossRef]
