1. Introduction
Artificial intelligence (AI) is rapidly penetrating and transforming various industries. From mobile healthcare to autonomous driving, from intelligent recommendation algorithms to quantitative investment models, AI technology shows significant potential in improving efficiency and optimizing decision-making. In the field of social governance, especially in law enforcement and judicial practices closely related to civil rights, AI applications are similarly becoming increasingly widespread (
Grimm et al. 2024). However, like every technological revolution throughout history, the widespread application of AI is a double-edged sword. While delivering benefits, it also poses serious challenges to society’s existing value systems, ethical norms, and especially to the long-established legal frameworks and fundamental rights-protection systems.
When algorithms become deeply involved in or even dominate key decisions involving personal freedom, property, dignity, and even life, a series of profound rights conflicts and governance dilemmas emerge. The opacity of algorithmic decision processes may erode the right to information and defense required by due process; biases lurking in training data may be solidified or even amplified by algorithms, creating systemic discrimination against specific groups and threatening the principle of equal protection; large-scale data-collection and analytical processing capabilities pose serious challenges to personal privacy and data self-determination rights; and the proliferation of automated decision-making may weaken human agency and moral responsibility (
Kaminski 2023). These risks are particularly prominent in law enforcement and judicial domains, as decisions in these areas often directly relate to the legitimate exercise of state power. Many countries face a major contemporary issue: how to manage the risks AI poses to fundamental rights. This requires effectively identifying, preventing, and regulating these risks while still embracing AI’s technological benefits. Crucially, AI’s development and application must always conform to the rule of law, respect human rights, and serve the public interest.
As a significant force in global AI technology development and application, China is also experiencing rapid advancement in AI applications within social governance. On the one hand, a data law system with Chinese characteristics has been preliminarily established, and regulatory bodies have successively issued specialized regulations for data and AI governance. On the other hand, China’s AI-governance practices face unique challenges: the imbalance between technological application and regulatory capacity has led to governance lag, insufficient professional expertise and standards affect regulatory effectiveness, and targeted rights-protection mechanisms in key areas such as law enforcement and judicial practice are lacking (
Zou and Zhang 2025;
B. Chen and Chen 2024;
Huang et al. 2024). The complexity of these challenges is particularly prominent within China’s specific institutional background, requiring both learning from international experience and avoiding simple transplantation of Western models.
This paper aims to provide references for improving China’s AI regulatory approaches with a focus on the protection of citizens’ fundamental rights by examining typical cases and governance norms in the EU, the US, Japan, and South Korea, which represent some of the most advanced AI practices in the world. This paper will discuss the following key issues around the theme of current AI applications and rights conflicts: What major rights conflicts have been triggered by specific AI applications in the social governance field? How do existing fundamental rights theories face challenges due to AI characteristics? What responses have different global jurisdictions made at the legal system level? Based on China’s national conditions, what positive international experiences and theories can be drawn upon to improve the AI regulatory system?
This paper is divided into six sections.
Section 2 presents the landscape of AI applications in social governance and the rights conflicts they trigger, raising the core issues.
Section 3 delves into the theoretical level, exploring the evolution of relevant fundamental rights and new challenges brought by AI, laying the theoretical foundation for subsequent analysis.
Section 4 turns to the practical level, analyzing major global AI-governance models and demonstrating diverse paths.
Section 5, building on the previous three sections, focuses on China, proposing specific legal improvement paths and institutional recommendations.
Section 6 summarizes the paper and looks forward to future research directions and practical prospects.
2. The Reality of AI in Social Governance
This section examines AI applications in three areas: judicial assistance, technology-enabled law enforcement, and welfare supervision. These cases reveal how AI enhances social governance but also poses severe challenges to citizens’ fundamental rights.
2.1. Judicial Assistance: Efficiency Enhancement and Due Process Dilemmas
In recent years, the use of AI in the judicial field has continuously increased, showing diverse development trends. Particularly in criminal justice, AI-driven judicial assistance systems have been gradually adopted in some jurisdictions. For example, Chinese courts have widely deployed smart court systems that connect various government departments and the social credit system, providing recommendations to judges and streamlining penalty procedures. However, there are concerns that this marks the beginning of technology companies and capital eroding judicial power (
S. Chen 2022). In the US, COMPAS is a notable example. Through a set of confidential algorithms, this system employs big data analysis to establish criminal risk-assessment models that predict the probability of defendants reoffending, absconding, and other behaviors, assisting judges in their judicial decision-making. Proponents argue that these emotion-neutral tools mitigate judges’ subjective biases in decision-making (
Chatziathanasiou 2022).
However, the application of judicial assistance systems is not without controversy, with the case of
State v. Loomis (
2016) in Wisconsin, USA, being a classic example. In this case, Loomis was accused of participating in a shooting incident. He denied involvement in the shooting but pleaded guilty to two other lesser charges. COMPAS’s assessment results deemed him at extremely high risk of reoffending. During sentencing, the judge, considering both Loomis’s criminal record and COMPAS’s assessment results, sentenced him to six years in prison. Loomis appealed, arguing principally that the judge’s decision relied on a confidential algorithm whose operational logic neither the public nor the parties involved could examine, let alone effectively cross-examine and debate, violating the principle of due process. Moreover, the algorithm’s assessment basis might be derived from group data rather than individual evaluations tailored to the specific case, violating the principle of individualized sentencing. The state supreme court upheld the lower court’s decision, finding that referencing COMPAS during sentencing did not violate the defendant’s due process rights if properly used by judges. However, the court also imposed some limitations, emphasizing that such assessment results should not be used alone to determine the length of sentences, and judges must provide certain reasoning when adopting assessment results (
Beriain 2018).
Academics have produced substantial research on the accuracy and fairness of tools such as COMPAS. Despite differing viewpoints, the findings generally suggest that COMPAS may not have truly served its role in supporting judicial decision-making.
Dressel and Farid (
2018) have found that COMPAS offers no advantage in predictive ability; its accuracy is comparable to models constructed using just two simple variables—age and number of prior offenses—as well as to predictions made by ordinary people without professional training. Moreover, the probability of Black individuals being incorrectly predicted as having a high risk of reoffending is nearly twice that of White individuals. However, this research also points out that ordinary people’s predictions exhibit similar biases. This suggests that such algorithmic bias may originate from humans themselves. Another study by
Lagioia et al. (
2023) also indicates that the training data used by COMPAS may contain historical or structural biases (such as income, education, race, residence, etc.).
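To make the kind of audit described above concrete, the following is a minimal sketch in Python using entirely synthetic data; the feature names, coefficients, sample size, and 0.5 decision threshold are invented for illustration and are not drawn from COMPAS or the cited studies. It fits a two-feature predictor of the sort Dressel and Farid used as a baseline and then measures the false positive rate separately for two demographic groups, the metric behind the disparity reported for COMPAS.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
group = rng.integers(0, 2, n)  # two demographic groups, purely synthetic

# Synthetic "reoffended" outcome driven only by age and number of priors.
logit = -1.0 - 0.03 * (age - 30) + 0.4 * priors
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

# A two-feature baseline predictor, analogous to the simple model in the study.
X = np.column_stack([age, priors])
high_risk = LogisticRegression().fit(X, reoffended).predict(X).astype(bool)

# Group-wise false positive rate: flagged as high risk but did not reoffend.
for g in (0, 1):
    did_not_reoffend = (group == g) & (~reoffended)
    print(f"group {g}: false positive rate = {high_risk[did_not_reoffend].mean():.2f}")
```

In a real audit, the risk scores and outcomes would come from actual case records rather than from a model fitted to simulated data.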
Similar controversial cases are too numerous to list. In the case of
State v. Guise (
2018) in Iowa, USA, defendant Guise was charged with second-degree theft and other crimes. The district court used a risk-assessment tool called IRR during sentencing, which recommended enhanced sentencing for Guise. However, the appellate court overturned the district court’s decision, its core conclusion being that the use of IRR during sentencing lacked legislative authorization and thus undermined the judgment’s legality. A judge noted that the law has never permitted such tools to be used in court without providing statistical context (
Garrett and Monahan 2019). However, the state supreme court ultimately overturned the appellate court’s view and upheld the district court’s decision.
It is worth noting that some research (e.g.,
Kopkin et al. 2017) suggests these tools can reduce imprisonment. Especially for low-risk offenders, they might encourage judges to lean more toward sentencing them to community corrections rather than incarceration. Even so, these scholars still emphasize the critical necessity of improving algorithmic transparency and reducing bias. These cases collectively reveal the “algorithmic black box” problem in judicial assistance systems, while biases inherited from humans themselves may be continuously amplified through algorithms.
2.2. Technology-Enabled Law Enforcement: Precise Governance and Diverse Rights Conflicts
The proliferation of technology-enabled law enforcement systems, especially AI-driven predictive policing and facial recognition technology, aims to improve the accuracy and coverage of law enforcement activities. Compared to traditional enforcement methods that primarily rely on human patrols, on-site interventions, and retrospective investigations, these systems are believed to enable faster and more intelligent identification, prevention, and combating of illegal activities (
C. Li 2020). For example, many Chinese cities use facial recognition and surveillance cameras to crack down on jaywalkers, displaying their names, photos, and ID numbers on public screens as a deterrent (
T. Li 2018). However, while improving efficiency, these systems also pose systematic challenges to multiple fundamental civil rights. Some studies (e.g.,
Söderholm 2023) show that authorities in different regions are fully aware of the risks involved, but due to high expectations for AI in maintaining social stability, the core issue has shifted to how to seek a delicate balance between technological application and the protection of fundamental rights.
Facial recognition currently may be one of the most controversial technology-enabled law enforcement tools. Unlike traditional passive surveillance cameras, facial recognition systems can automatically identify, track, and analyze individual behavior, possessing unprecedented capabilities for precise monitoring and identity confirmation. The large-scale, normalized application of this technology directly impacts citizens’ privacy rights (
Tan 2018). Citizens cannot opt out of this monitoring, nor can they easily know how their facial data is being used, damaging individuals’ data self-determination rights over their sensitive biometric information. The accuracy of facial recognition is also frequently questioned. In a 2024 report, the U.S. Commission on Civil Rights pointed out that facial recognition has lower identification accuracy for people of color and women, which could lead to erroneous identity designations, unjust suspicions, or even wrongful arrests, constituting a violation of the principle of equal protection (
Arshad 2024).
Predictive policing systems have similarly triggered profound rights conflicts. These systems predict the risk of crimes occurring in specific areas based on historical crime data, demographic information, and environmental factors.
O’Donnell (
2019) points out that due to historical selective enforcement and systemic discrimination, people of color and low-income areas are often overrepresented in databases. This leads systems to tend to deploy more police forces to these areas, forming a “feedback loop”: more police presence means more arrests, and more arrests in turn reinforce the system’s risk assessment of that area, constituting a form of algorithmic discrimination. Interestingly, similar to the Loomis case, while some legal scholars (e.g.,
Lee et al. 2024) are apprehensive about the deployment of predictive policing systems, one empirical work from
Brantingham et al. (
2018) shows, on the basis of arrest statistics, that after such systems were deployed there were no significant differences in arrest proportions across ethnic groups, and the total number of arrests even showed decreasing or stable trends.
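The feedback loop described above can be illustrated with a toy simulation. The sketch below assumes two districts with identical true incident rates and a small historical imbalance in recorded data; every number is invented for illustration and the deployment rule is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(1)
recorded = np.array([55.0, 45.0])   # historical incident records, slightly skewed
true_rate = 0.5                     # identical true incident probability in both districts

for day in range(365):
    target = int(np.argmax(recorded))    # patrol the district with more recorded incidents
    if rng.random() < true_rate:         # incidents are only recorded where police are present
        recorded[target] += 1

print("recorded incidents after one year:", recorded)
# The initial skew grows even though both districts are identical, and the
# new data then appears to "confirm" the original deployment decision.
```

The simulation is not a model of any deployed system; it only shows how selective observation can turn a small data imbalance into a self-reinforcing pattern.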
Additionally, there exists a more covert type of behavioral analysis system that attempts to identify potential abnormal behaviors by monitoring citizens’ behavioral patterns, social relationships, and online activities. As technology continuously evolves, the boundaries of private domains also keep changing. Traditionally, physical spaces such as residences were viewed as private domains protected by constitutional rights, yet the extent to which network-based virtual spaces and the personal data they generate belong to the private sphere remains the subject of ongoing legal dispute. For example, as recently disclosed by
The Washington Post, the UK Home Office, under the Investigatory Powers Act, requested that Apple Inc. provide access to fully encrypted data, not just access to specific accounts (
Menn 2025).
Saura et al. (
2022) indicate that for governments, privacy itself is a resource that can be used for social monitoring. The application of such technologies further blurs the boundaries between legitimate investigation and illegal surveillance. When people realize they are being continuously monitored, they often tend toward self-censorship, producing a “chilling effect” (
Ayoub and Griffiths 2023). Evidently, the current legal framework urgently needs to establish rights-protection mechanisms that keep pace with technological development, seeking a dynamic balance between public security and individual freedom.
2.3. Welfare Supervision: Fairness and Dignity Under Automated Decision-Making
Automated decision-making is also widely applied in areas affecting people’s livelihoods. SyRI was an AI system deployed by the Dutch government from 2015 until 2020 that integrated large amounts of personal data from multiple government departments to flag individuals or households deemed to have a higher risk of fraud among welfare recipients. Critics (e.g.,
Toh 2020) pointed out that the system lacked transparency, with its models and indicators kept secret from the public and even the courts, making it difficult for outsiders to assess SyRI’s accuracy and fairness or to verify that it fulfilled data-protection obligations. Particularly concerning is that it was used primarily in low-income and minority ethnic neighborhoods, subjecting these groups to excessive surveillance and stigmatization. Scholars (e.g.,
Wieringa 2023;
Rachovitsa and Johann 2022) generally believe that SyRI’s infringement on citizens’ fundamental rights is disproportionate to the national interests it attempts to achieve. The legal challenge against SyRI ultimately succeeded in 2020. The Hague District Court ruled that SyRI failed to adequately safeguard personal privacy, operated in a non-transparent manner, lacked sufficient checks and balances, and violated Article 8 of the European Convention on Human Rights concerning respect for private and family life (
NJCM et al. v. The Dutch State 2020). Additionally, the court expressed concerns that SyRI might systematically discriminate based on socioeconomic status or migration background (
van Bekkum and Borgesius 2021).
Similarly, Australia once implemented an automated welfare debt-recovery program known as “Robodebt.” Due to the lack of effective human oversight and appeal channels, this program was even linked to the suicides of some welfare recipients (
Rinta-Kahila et al. 2024). Ironically, Robodebt’s official name was “Online Compliance Intervention,” and its core mechanism used algorithms to automatically compare annual income data from the tax office against the periodic income declarations that the social welfare department used to distribute benefits, in order to automatically identify those who might have received excess welfare. This calculation method itself had obvious flaws, failing to consider seasonal fluctuations or the temporary nature of income. The system automatically generated debt notices and completely reversed the burden of proof onto welfare recipients, requiring them to prove their innocence. If they could not provide detailed income evidence from years prior (which was extremely difficult for many low-income individuals and temporary workers), the system would initiate debt-collection procedures, including commissioning third-party debt-collection agencies or deducting the alleged debts directly from subsequent welfare payments (
van Krieken 2024). In 2019, the Federal Court ruled that the calculation method used by Robodebt was illegal, forcing the Australian right-wing government to abolish the corresponding debt-recovery program.
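The arithmetic flaw at the core of the scheme is easy to reproduce. The sketch below uses invented figures and a deliberately simplified rule to show how averaging an annual income evenly across 26 fortnights manufactures apparent under-reporting for a seasonal worker whose declarations were accurate.

```python
FORTNIGHTS = 26
annual_income = 26_000                       # all earned in 13 fortnights of seasonal work
declared = [2_000] * 13 + [0] * 13           # income the person actually reported each fortnight
averaged = [annual_income / FORTNIGHTS] * FORTNIGHTS   # what the matching algorithm assumed

# Fortnights where the averaged figure exceeds the declared figure were treated
# as evidence of under-reporting, even though every declaration was accurate.
phantom = sum(1 for d, a in zip(declared, averaged) if a > d)
print("fortnights wrongly treated as under-reported:", phantom)  # 13
```

Here the person had no income at all in half the year, yet the averaging rule attributes $1,000 to each of those fortnights and converts legitimately received benefits into an apparent debt.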
These two cases fully reflect that if automated decision-making lacks sufficient transparency, accuracy, fairness considerations, as well as effective human oversight and remedy mechanisms, it may cause catastrophic consequences to the fundamental rights and dignity of vulnerable groups. It should be noted that compared to countries like Australia and the Netherlands, which are generally considered to have high levels of social welfare, other countries with lower levels exhibit different characteristics in this regard (for example, China’s social welfare expenditure in 2021 accounted for only 2.96% of GDP). In these countries, discrimination and biases based on attributes such as identity, household registration, and birthplace were already institutionalized long before automated decision-making appeared, and rapid economic development has not diminished this phenomenon (
Cheng et al. 2022). In a sense, when automated decision-making systems are introduced, they do not create new inequalities but rather incorporate old inequalities into algorithms, making them more systematic and difficult to challenge. As some scholars like
Chowdhury (
2024) have stated, there are currently no effective regulatory mechanisms for these unfair or even erroneous automated decisions, allowing systems designed to satisfy political motives regardless of human consequences to be widely deployed. These cases collectively demonstrate that not all public decisions are suitable for AI processing; especially while many problems at the social and human level remain unresolved, appropriate boundaries must be set for AI systems.
3. The Evolution and Challenges of Data Rights
While current data rights are broader than traditional privacy rights, their history shows they were built upon privacy as a core concept. To understand the challenges to fundamental rights posed by AI, we first need to clearly outline the evolution of relevant rights and thoroughly explore the fundamental impact of AI’s technical characteristics on these rights. This section will first review how privacy rights theory gradually expanded from focusing on physical space to emphasizing the right to informational self-determination, eventually developing into the foundation of modern data-protection laws. Subsequently, it will analyze how key technical characteristics of AI challenge the evolved data rights theory. Finally, it will explore new governance paradigms and rights-protection concepts that academia is actively exploring and forming to address these challenges.
3.1. Digitalization of Fundamental Rights: From Privacy to Informational Self-Determination
The cornerstone of modern rule-of-law states lies in protecting citizens’ fundamental rights. However, the theoretical system of fundamental rights is not immutable; it continuously evolves in interaction with social realities, especially technological developments. Initially, fundamental rights focused on protecting personal freedom and property security from direct state intervention, but today, these threats no longer come solely from the state but also from increasingly complex social relationships and advancing scientific and technological capabilities (
Sun 2019;
De Gregorio and Radu 2022).
Privacy rights epitomize this evolution. Initial privacy conceptualizations centered on the sanctity of private domains (particularly the home), and individuals’ freedom from external intrusion within these demarcated spaces. Samuel D. Warren and Louis D. Brandeis noted in the late 19th century that the rapid development of photography and the press increased the risk of private life exposure. They advocated for a “right to be let alone” because existing legal protections were insufficient to address these new threats (
Gasser 2016). By the mid-20th century, William L. Prosser had classified privacy violations into four types: intrusion upon seclusion or private affairs, public disclosure of embarrassing private facts, publicity placing someone in a false light, and commercial use of another’s name or likeness. This typological analysis helped concretize and implement privacy rights (
Solove and Richards 2010). Later, in the latter half of the 20th century, computer technology exponentially increased the capacity to collect, store, process, and transmit information, allowing government agencies and commercial organizations to amass vast amounts of personal data. This structural transformation shifted the nature of threats to individual rights from physical intrusions to the data power imbalance between individuals and the state or large firms (
Froomkin 2000). Personal information such as location, consumption, and social relationships could be digitized and integrated for analysis, making people seem as though they were living in a transparent glass house. This new risk demanded that privacy rights theory develop and shift its paradigm.
Against this backdrop, privacy rights expanded from physical to virtual space. In 1967, Alan F. Westin presciently proposed that privacy rights are based on individuals’ ability to control their own information—that is, to autonomously decide when, how, and how much information about themselves is disclosed. This theoretical shift from the right to be left alone to the right to information control emphasized individuals’ status as active subjects in information processing. It shifted privacy protection from defending against external interference to regulating information flow (
Rouvroy and Poullet 2009). Westin’s idea profoundly influenced subsequent legal practices, especially the right to informational self-determination developed in German constitutional court cases, which is essential to individual freedom and dignity, elevating information control to a fundamental right (
Schwartz 1989). Westin’s theory helped create the Fair Information Practice Principles (FIPPs), which became the foundation of global data-protection laws. These principles became widely known through promotion by international organizations, and typically include collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. FIPPs changed data protection from abstract rights declarations to concrete behavioral norms and procedural designs, providing a blueprint for operable data-protection legal frameworks. As a result, the FIPPs have greatly influenced data-protection laws, including the GDPR (
Naef 2023).
It is noteworthy that from the late 19th century to the early 21st century, the aforementioned discussions predominantly revolved around American scholars, reflecting the US’ long-standing position as the primary site of technological innovation, particularly as the birthplace of both computers and the Internet, where related challenges were confronted earlier. China had a very late start in privacy rights, resulting in a relatively weak cultural foundation in this area. It was not until 2009 that privacy was established in legislation as an independent right, and only in the 2017 General Provisions of the Civil Law was it explicitly incorporated into the category of personality rights. This corresponds with China’s economic and technological development, which lagged behind for an extended period before experiencing rapid growth over the past two decades.
3.2. Technical Characteristics of AI and Rights Dilemmas in the AI Era
From privacy to data rights, legal systems have strived to delineate rights boundaries and establish behavioral norms. However, AI’s development involves qualitative changes in learning, decision-making, and human autonomy, not just quantitative changes in data processing speed and efficiency. AI can autonomously learn and predict from massive data, and make decisions or take actions, surpassing traditional information systems in complexity, adaptability, and influence (
Jordan and Mitchell 2015;
LeCun et al. 2015). For this reason, the widespread use of AI, especially in law enforcement and justice, poses multidimensional, deep-seated challenges to existing fundamental rights theories based on human behavioral logic (
Bakiner 2023).
AI generally suffers from opacity, or “algorithmic black box,” issues. Technical complexity, randomness in model training, or deliberate strategic ambiguity may cause these issues (
Janssen et al. 2022). As
Pasquale (
2011) observed, institutions initially emphasize system transparency and objectivity to allay concerns, but over time, system controllers’ needs for security and avoiding manipulation have caused secrecy to triumph over transparency. Especially in advanced AI, millions or billions of parameters make their internal logic extremely complex, making it difficult even for designers to fully explain their decision-making processes (
Linardatos et al. 2021). This reveals a dilemma: how to ensure the transparency and explainability of AI decisions while pursuing efficiency and automation. If the law requires a reasonable explanation for a given decision, but the AI that made it operates as a black box, it will be difficult to provide or review that explanation.
Without understanding how their personal data is analyzed and used, people cannot achieve “informed consent,” let alone “data control.” Unlike traditional data processing, AI can generate new personal data through large-scale inference based on personal data that users have consented to collect. This capability suggests that existing consent mechanisms may be inadequate. When AI infers sensitive data not explicitly authorized by users, how can users meaningfully exercise rights over this new data? Particularly in law enforcement and justice, if people affected by AI decisions cannot know the main basis and logic of those decisions, they may find it difficult to seek effective remedies, depriving them of their due process rights (
Cheong 2024).
The reliance of AI on massive data and its associative analysis capabilities erodes two core principles: data minimization and purpose limitation. Data minimization requires data collection to be limited to the minimum scope needed to achieve specific goals, but AI effectiveness often correlates with data volume. To uncover unknown patterns, system controllers tend to collect as much data as possible. Generative AI has exacerbated this challenge (
Sonboli et al. 2024;
Ganesh et al. 2024), and its inference capability is even more disruptive. Algorithms can infer sensitive information by analyzing seemingly unrelated or non-sensitive data. This means that even if purpose limitation is followed during data collection, the system may generate and use highly sensitive information in subsequent operations. This “data alchemy” makes “function creep” a norm (
Koops 2021), meaning that data initially collected for specific purposes may be easily used for other purposes after algorithmic processing, making “informed consent” illusory in many scenarios.
Additionally, from decision support systems that provide references and assist in judgment, to fully automated systems that can independently make decisions, AI is increasingly replacing human decision-making positions in various fields (
Colback 2025). This trend may increase efficiency and lower costs, but it raises concerns about responsibility attribution, biases, and lack of human oversight (
BaniHani et al. 2024;
Booyse and Scheepers 2024). Research indicates that as dependence on automated systems strengthens, humans develop an automation bias, tending to adopt AI opinions even in the face of other contradictory information (
Kazim and Tomlinson 2023). In addition, long-term reliance on AI assistance may also lead to the deskilling of professionals, weakening their ability to complete tasks independently (
Shukla et al. 2025).
AI is not value-neutral; its decisions are influenced by multiple factors such as data, design, and scenarios. What makes algorithmic discrimination concerning is not only the potential injustice but also its high level of covertness (
Pasipamire and Muroyiwa 2024). The algorithms’ complexity and scientific facade make it difficult for users and regulators to identify their biases. This differs from explicit discrimination based on identity characteristics; algorithmic discrimination often manifests as statistical differences (
Zuiderveen Borgesius 2020), with more complex causal chains, posing new questions to existing anti-discrimination legal frameworks. Furthermore, defining and implementing fairness in algorithm design is itself both an ethical and technical challenge because fairness has many definitions across disciplines (
Trigo et al. 2024). Even within the AI field, mathematized fairness standards often have internal conflicts, making them difficult to satisfy simultaneously and requiring difficult value trade-offs in specific contexts (
Søgaard et al. 2024;
Jie Xu et al. 2022).
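A small numerical example shows why such standards conflict. In the sketch below, with all counts invented, a classifier flags both groups at exactly the same rate and so satisfies demographic parity, yet its false positive rates diverge because the groups’ base rates differ; satisfying one criterion does not imply the other.

```python
import numpy as np

# Each row: (group, true label, predicted label); all counts are invented.
data = np.array(
    [(0, 1, 1)] * 40 + [(0, 0, 1)] * 10 + [(0, 0, 0)] * 50 +   # group 0, base rate 0.40
    [(1, 1, 1)] * 15 + [(1, 0, 1)] * 35 + [(1, 0, 0)] * 50     # group 1, base rate 0.15
)

for g in (0, 1):
    d = data[data[:, 0] == g]
    positive_rate = d[:, 2].mean()        # demographic parity criterion
    fpr = d[d[:, 1] == 0, 2].mean()       # one component of equalized odds
    print(f"group {g}: predicted-positive rate {positive_rate:.2f}, "
          f"false positive rate {fpr:.2f}")

# Both groups are flagged at the same rate (0.50), satisfying demographic
# parity, yet false positive rates differ sharply (about 0.17 versus 0.41).
```

Which criterion should take priority is precisely the kind of value trade-off that cannot be settled by the mathematics alone.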
3.3. Paradigm Shift in Rights Protection: Algorithmic Governance and Procedural Justice
Traditional rights frameworks primarily focus on individual consent to data collection. However, AI’s ability to analyze massive, complex, and dynamic data far exceeds human comprehension, making it difficult for individuals to assess the resulting risks accurately. Harms potentially caused by AI are also difficult to address through individual rights claims (
Taeihagh 2025). To meet these challenges, there is wide consensus that the fundamental rights-protection paradigm should be adjusted. The goal is to develop an institutional framework that upholds existing values while adapting to AI’s characteristics, which will lead to more effective protection. This implies that while the core values of fundamental rights remain constant, their implementation mechanisms, responsibility allocation, and governance priorities must adapt to AI’s unique characteristics (
Cheong 2024).
Algorithmic governance and due process are precisely the core threads in current theoretical responses to AI challenges. The essence of algorithmic governance lies in shifting the focus of rights protection toward the entire lifecycle of AI systems—their design, development, deployment, and operation—achieving systematic, risk-oriented regulation (
Danaher et al. 2017). This paradigm shift is manifested at multiple levels. First, the object of governance is no longer merely data, but rather the algorithm and the entire system in which it is embedded as the core of governance. Second, the main responsibilities and obligations are allocated more to organizations that develop, deploy, and use AI, emphasizing that they should take on the active responsibility of ensuring system compliance, safety, fairness, and transparency, rather than primarily pushing risks onto individual users (
Radanliev 2025). Additionally, algorithmic governance advocates adopting differentiated strategies based on risk assessment, that is, imposing regulatory requirements of different intensities and types according to the specific scenarios of AI applications and their potential risk levels to fundamental rights, in order to achieve effective allocation of governance resources (
Grimmelikhuijsen and Meijer 2022).
If algorithmic governance provides a macro, systematic risk-management framework, then algorithmic due process focuses on the micro level, creatively applying the procedural justice values cherished by traditional rule of law to algorithm-dominated decision-making processes (
Cheong 2024). When algorithms are used to make decisions that significantly impact individual rights or obligations, the requirements for procedural safeguards should not be reduced or exempted simply because technical factors have been introduced into the decision-making process; on the contrary, more detailed and strengthened procedural arrangements may be needed. Specifically, among the key elements of algorithmic due process, transparency enables individuals to review data usage and identify potential privacy violations or biases. Explainability allows individuals to understand how their data contributes to specific decisions, thereby helping to identify errors or misuse in the data. Accountability ensures that organizations are responsible for protecting personal data in their algorithmic systems and are held accountable for violations or abuses. Contestability allows individuals to seek correction or remedy when they believe algorithmic decisions are based on inaccurate, unfair, or rights-infringing data (
Kinchin 2024;
Rubim Borges Fortes 2020). These elements form a checks and balances system to protect individual data rights throughout algorithmic decision-making.
Of course, procedural fairness cannot completely substitute for substantive fairness. To effectively govern issues of bias and discrimination in algorithms, substantial considerations of fairness and non-discrimination principles must be embedded within the algorithmic-governance framework (
Khazanchi and Saxena 2025). As
Malgieri and Pasquale (
2024) argue, companies must demonstrate that their AI meets clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before deployment. Therefore, specific measures may include mandating comprehensive fairness audits before algorithm development and deployment to identify and assess potential discriminatory risks; requiring the use of representative, high-quality training data, or employing technical means to mitigate the impact of data bias; carefully selecting feature variables and optimization objectives in algorithm design to avoid inadvertently introducing or amplifying discrimination; conducting continuous monitoring and evaluation of deployed systems to discover and correct potentially discriminatory consequences promptly; and enhancing the transparency of algorithmic fairness-assessment standards and practices, explaining to the public efforts made to promote fairness and existing limitations. These specific rules map out the essence of algorithmic governance, which, as scholars define it, is a rule-based social order that relies on coordinated cooperation among multiple subjects and incorporates particularly complex computer cognitive programs (
Gritsenko et al. 2022).
In summary, facing the systematic challenges brought by AI, fundamental rights-protection theory is undergoing a profound paradigm shift. The core trend is a shift from previously emphasizing individual defense and passive response toward greater emphasis on systematic prevention, whole-process governance, and shared responsibility among multiple subjects. The macroscopic framework of algorithmic governance and the microscopic safeguards of algorithmic due process constitute this theoretical response’s main content, aiming to anchor the value coordinates of individual rights in complex algorithmic environments, ensuring that technological development can better serve human dignity, freedom, and fairness.
3.4. Typical Problems in Algorithmic Decision-Making
The previous sections briefly introduced that although modern algorithmic decision-making systems attempt to enhance efficiency and coverage, they simultaneously expose a series of typical problems. Their technical and institutional causes may easily trigger significant rights conflicts and legal risks. This section focuses on key issues, including black boxes, decision biases, responsibility attribution, and human intervention.
As discussed previously, algorithmic decision processes often lack transparency (black boxes). On the one hand, machine learning algorithms, especially deep-learning models, must capture nonlinear relationships among thousands or more features in high-dimensional spaces. This complexity makes algorithms inherently opaque, not merely because developers are unwilling to explain them, but because the internal logic of these systems exceeds the range of direct human comprehension (
Linardatos et al. 2021). On the other hand, developers or operators typically view algorithms as core assets and refuse to disclose how their algorithms reach conclusions under the pretext of protecting trade secrets or system security (
Pasquale 2011). This information asymmetry prevents individuals affected by decisions from obtaining sufficient information to question or appeal, which is a primary cause of public distrust. When these issues arise in administrative or judicial domains, they may infringe upon citizens’ due process rights (
Radanliev 2025).
Furthermore, algorithms rely on historical data for learning and decision-making. If the training data itself contains biases, algorithms may internalize these data biases as decision biases. Simultaneously, the subjective tendencies of algorithm designers in selecting features and objective functions may covertly inject their value preferences into the model (
Z. Chen 2023). More seriously, algorithms may form self-reinforcing feedback loops. Systems make decisions based on biased data that influence the real world, and these results in turn become data for future algorithm training (
O’Donnell 2019). Even when sensitive labels such as gender, race, or sexual orientation are excluded, algorithms can still indirectly introduce biases through proxy variables such as postal codes. These technical and human factors cause algorithms to potentially lack respect for human diversity and provide discriminatory treatment to certain groups based on statistical characteristics (
Cossette-Lefebvre and Maclure 2023), threatening the values of equality and personal dignity protected by law.
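The proxy-variable mechanism can likewise be demonstrated with synthetic data. In the sketch below, the correlation strength and base rates are assumptions chosen for illustration; the protected attribute is never shown to the model, yet a correlated postal code lets it reproduce the historical disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
protected = rng.integers(0, 2, n)
# A "postal code" that correlates strongly with the protected attribute.
postcode = np.where(rng.random(n) < 0.9, protected, 1 - protected)
# Historical outcome labels are themselves skewed against group 1.
label = (rng.random(n) < np.where(protected == 1, 0.6, 0.3)).astype(int)

# The protected attribute is withheld from training; only the proxy is used.
model = LogisticRegression().fit(postcode.reshape(-1, 1), label)
flagged = model.predict(postcode.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: flagged rate = {flagged[protected == g].mean():.2f}")
# The disparity survives because postal code stands in for group membership.
```

Simply deleting sensitive attributes from the input is therefore not, by itself, a guarantee of non-discriminatory output.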
Algorithmic systems are typically developed, designed, and operated by multiple organizations. Multi-entity participation leads to the “many hands problem” and creates responsibility gaps, making it difficult to determine how much responsibility specific individuals or organizations should bear (
Horneber and Laumer 2023). When algorithms cause errors and harm, technical complexity makes causal chains difficult to clarify, while legal gaps provide relevant entities space to evade responsibility. For example, in the COMPAS case, responsibility may be dispersed among the company that developed the algorithm, the court that deployed the system, and the judge who made the final decision, with no single entity being held accountable for the consequences of algorithmic bias. In the Robodebt and SyRI cases, responsibility was dispersed among numerous government departments and officials, with policy promoters primarily bearing political responsibility related to elections (
Rinta-Kahila et al. 2024;
Grimmelikhuijsen and Meijer 2022). Corporations also frequently use technological neutrality as an excuse, claiming they have exercised a reasonable duty of care, thereby attempting to exempt themselves from responsibility. This unclear attribution of responsibility makes effective accountability and remediation difficult to maintain; when rights are violated yet no one is held responsible, the fairness and deterrent effect of the law is severely compromised.
Objectively, before a policy is implemented there are often multiple opinions representing different stakeholders, but there remains a duty to discover and correct biases promptly during implementation. However, as the autonomy of algorithmic systems increases, traditional bias-correction mechanisms become difficult to apply (
Cossette-Lefebvre and Maclure 2023). If biases that must be corrected are discovered in algorithms or models during implementation, sometimes retraining is even required to resolve these issues. This may cause policy promoters to continue pushing the system to conceal or avoid responsibility even when they are aware of system defects. Therefore, algorithms must leave space for meaningful human intervention. As
Santoni de Sio and van den Hoven (
2018) said, algorithms should be designed with embedded tracking and tracing mechanisms to ensure systems can respond in real-time to human input based on moral and practical considerations, and clearly associate responsibility for each key decision with specific participants, thereby always preserving substantial and effective space for human intervention and accountability.
The typical problems above have different emphases, from technical mechanisms to institutional design, yet they collectively point to the deep conflicts between algorithmic decision-making and legal rights. These problems are characterized by cross-domain and cross-entity complexity that neither the market nor government can effectively address alone, requiring regulation through institutional governance approaches.
4. Legal Responses and International Practices
Facing the challenges that AI poses to fundamental rights, countries around the world are actively exploring corresponding legal and policy responses. These responses take diverse forms, reflecting differences in legal traditions, governance philosophies, economic considerations, and social priorities across different jurisdictions. However, their common goal is to attempt to set ethical boundaries and legal tracks for the development of this transformative technology, striving to mitigate its potential impact on individual rights and social structures. This section will explore the major regulatory frameworks around the world, including the US, which maintains an overall leading position in AI; the EU, which has produced the most developed body of data-law theory; Japan and South Korea, traditional technology powerhouses with semiconductor advantages; as well as China.
4.1. The EU’s Systematic Regulation: A Rights-Based Risk-Governance Paradigm
Globally, countries are actively exploring how to address the opportunities and challenges brought by AI. The EU has explicitly positioned itself as the architect of a gold standard for AI regulation, constructing a comprehensive legal framework anchored in fundamental rights protection and structured around risk-prevention principles. The core of the EU’s AI regulatory strategy lies in two laws: the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). As a pioneer in data protection, GDPR has become an important reference standard for legislation worldwide. The personal data processing principles it establishes (lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, confidentiality, and accountability) provide a foundational legal basis for addressing challenges brought by AI. As the world’s first comprehensive AI law, the AIA forms a complementary relationship with GDPR and aims to address the multidimensional risks of AI systems themselves more directly. While the AIA primarily follows a product safety law framework (
Ebers 2024), it maintains important connections with GDPR in protecting fundamental rights. This act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk, and applies differentiated regulatory measures to each level.
Article 22 of GDPR introduced an important innovation, establishing a basic framework for AI-driven automated decision-making. Although this provision was not initially intended to regulate AI, it was the first to explicitly stipulate that individuals have the right not to be subject to decisions based solely on automated processing (with exceptions), and enjoy rights to express opinions, challenge decisions, and request human intervention from data controllers, providing an innovative mechanism for addressing challenges such as algorithmic bias and lack of transparency (
van Kolfschooten 2024). This is widely viewed as early legislative confirmation of algorithmic due process. Article 14 of AIA further strengthens the focus on automated decision-making by requiring meaningful human oversight or intervention for high-risk AI systems. Notably, according to Recital 45 of the AIA, any actions governed by EU law remain valid despite the implementation of the AIA, and Recital 69 further establishes that core data-protection principles outlined in Union data-protection law apply to all personal data processing activities. Therefore, GDPR continues to apply whenever personal data is processed in the context of AI. A study focused on social security welfare systems shows that even with the introduction of the AIA, GDPR may still be an important legal basis for automated decision-making by public sectors, including social security departments (
Enqvist 2024).
Regarding “meaningful human intervention,” the European Data Protection Board’s guidelines clearly state that it refers to having the power and capability to change decisions, and warn against attempting to circumvent relevant regulations through superficial human involvement (
Lazcoz and de Hert 2023). GDPR also stipulates that preventive measures must be taken to protect personal data, among which Data Protection Impact Assessment (DPIA) and privacy by default principles are key elements, crucial for the responsible development and deployment of AI systems. DPIA is a requirement under Article 35 of GDPR; when data processing is likely to pose high risks to individual rights and freedoms (such as large-scale monitoring, sensitive data processing, automated decision-making, etc.), data controllers must conduct a DPIA. Notably, synergies exist between DPIA and the Fundamental Rights Impact Assessment (FRIA) of the AIA, as both frameworks share interconnected questions that feed into each other. This complementarity spans multiple dimensions, including stakeholder responsibilities, data processing practices, security measures, and risk mitigation strategies, allowing for more comprehensive protection when the assessments are conducted in coordination (
Thomaidou and Limniotis 2025;
Mantelero 2024). Article 25 of GDPR stipulates privacy by design and privacy by default principles, emphasizing the integration of data-protection measures into systems from the outset, requiring that systems must not collect, use, or share users’ personal data without their explicit consent.
Evidently, the prominent features of the EU model are its systematic nature and forward-looking approach. The systematic nature is reflected in the comprehensiveness of the regulatory architecture; GDPR and AIA not only focus on personal data protection but also comprehensively consider AI’s impact on fundamental rights, democratic values, and rule of law values, constructing a whole-process governance system from standard setting and prior assessment to continuous supervision. The forward-looking approach is manifested in the EU’s strategy of proactive regulation rather than passive response, attempting to build clear legal boundaries before the technology is widely applied, guiding technology development in directions aligned with its values (
Christou et al. 2025). Although the EU model also faces numerous challenges, including high regulatory costs, differences in implementation strength across countries, restrictions on cross-border data flows, and potential hindrances to innovation, its human rights-based systematic regulatory approach provides a valuable experiential reference for balancing technological innovation with the protection of fundamental rights.
4.2. US Decentralized Regulation: Market-Driven and Industry Self-Regulation
Unlike the EU’s comprehensive, centralized model, the United States has so far taken a decentralized, industry-led approach to AI regulation. There is no unified federal AI law; instead, the U.S. relies on a patchwork of existing laws, agency guidelines, and state initiatives (
Davtyan 2025), apart from a proposed provision in the One Big Beautiful Bill that would bar states from regulating AI. In practice, over 30 AI-related bills were introduced in Congress in 2023 alone, yet none has passed. Although the Biden administration released a Blueprint for an AI Bill of Rights, proposing five core principles (safe and effective systems, protection against algorithmic discrimination, data privacy, notice and explanation, and human alternatives and fallback mechanisms), this blueprint is merely guidance and lacks legally binding force. This model fully reflects the US policy preference for innovation-friendly and market-led approaches, emphasizing regulatory flexibility and seeking to avoid excessive government intervention (
Parinandi et al. 2024).
Many existing US federal laws, such as the Health Insurance Portability and Accountability Act, Fair Credit Reporting Act, and Children’s Online Privacy Protection Act, are being used to regulate AI applications within specific industries. These laws were not designed initially to regulate AI, but they are being adapted and interpreted to address issues such as data protection and algorithmic bias brought by AI. For instance, agencies like the Federal Trade Commission, Equal Employment Opportunity Commission, and Consumer Financial Protection Bureau are using their existing legal authorities to oversee the use of AI (
Oxford-Analytica 2024). While this decentralized legislation can formulate more precise rules based on the characteristics of specific industries, it also creates fragmentation in legal application and inconsistency in protection standards, particularly in cross-domain AI applications.
In recent years, US states have also begun to actively explore local legislation. For example, the California Consumer Privacy Act, passed in 2018, and the California Privacy Rights Act, passed in 2020, are similar to the EU’s GDPR in the field of data protection; New York City’s Automated Employment Decision Tools Law, passed in 2021, requires bias audits for AI systems used in employment. These local legislative innovations provide valuable experience for federal-level policy discussions and have also produced a certain “California effect,” wherein businesses operating across regions often adopt the laws of the strictest jurisdiction as their standard to avoid complex compliance procedures. This decentralized regulatory model has its unique advantages but also faces challenges such as unbalanced protection, insufficient certainty, regulatory fragmentation, and high compliance costs. Some scholars (e.g.,
Parinandi et al. 2024) warn that this “laboratories of democracy” approach risks a confusing array of 50 different AI regimes, leading to legal uncertainty and elevated compliance costs for multi-state operators. Industry self-regulation is another important component of the US model. Many technology companies and industry organizations have established their own AI ethical guidelines and practice guides. Additionally, the “AI Risk Management Framework” released by the National Institute of Standards and Technology provides an important reference for the industry. Although these soft norms lack mandatory enforcement power, they positively guide industry practices and shape ethical consensus.
There has long been disagreement over whether AI regulation requires more unified, comprehensive federal legislation, and this divide has become more pronounced against the backdrop of widespread generative AI adoption. For instance, Google has always claimed that AI needs to be regulated but opposes the creation of broad, horizontal laws. They believe regulation should be based on specific AI applications and actual risks, focusing on outputs rather than processes, filling gaps in existing laws rather than rewriting legislation, and empowering existing regulatory agencies rather than creating new ones (
Walker 2024). Google and Meta frequently criticize European regulation, believing the EU model hinders innovation (
Browne 2025). Especially under the Trump administration’s push, Meta is preparing to terminate third-party fact-checking, replace human review with AI, and test facial recognition in account recovery (
Kaplan 2025). As a de facto leader in generative AI, OpenAI positions itself as a key participant in broader competition with China. It frequently cites national security to emphasize the importance of establishing unified and relaxed federal regulation. OpenAI strongly opposes states (especially California) legislating independently and proposes establishing a framework for voluntary cooperation with the federal government (
Ghaffary 2024). Microsoft advocates for the government to play an important role in AI regulation, supporting a licensing system for developing and deploying high-risk AI models (
Smith 2023).
As reflected in statements from high-tech firms (e.g.,
Walker 2024;
Kaplan 2025;
Ghaffary 2024;
Smith 2023), the evolving US AI-governance paradigm exhibits a distinct “US model” that remains a work in progress. Rather than adopt a one-size-fits-all framework, many leading stakeholders favor risk-based, sector-specific rules that target high-risk applications and build on existing legal structures. This model is characterized by deep corporate involvement and prioritization of innovation and economic competitiveness, but it also raises concerns about whether it places too much emphasis on corporate opinions while neglecting the protection of ordinary citizens’ interests. Some scholars (e.g.,
Lancieri et al. 2025) caution that heavy reliance on industry self-policing and voluntary guidelines risks regulatory capture, where corporations shape rules to protect their interests at the expense of broader public safeguards. In stark contrast to the EU’s more comprehensive, rights-protection-focused centralized regulatory approach, the US model reflects profound differences in value orientations and regulatory philosophies even within democratic societies. Notably, the version of the One Big Beautiful Bill Act passed by the House in May 2025 would have prohibited states from enacting virtually any form of AI regulation for ten years, adding even more uncertainty to an already convoluted U.S. model.
4.3. Personal Information-Protection Laws in Northeast Asia
The Northeast Asian region, as an important engine for global AI technology development and application, has shown active legislative practices and unique governance ideas in the fields of data protection and AI governance in recent years. Based on their respective legal traditions and development stages, China, Japan, and South Korea have formed regulatory frameworks that share commonalities while maintaining distinctive characteristics, providing important reference cases for global AI governance.
As a traditional technology powerhouse, Japan enacted the Act on the Protection of Personal Information (APPI) as early as 2003. However, the initial APPI had significant limitations, particularly in that it only applied to businesses that had processed the personal information of at least 5000 individuals in the previous six months. This threshold left some entities that processed important data but were of a smaller scale unregulated by law. To modernize the data-protection system and align with international standards (especially GDPR), Japan has made several important revisions to APPI. The 2015 revision (effective in 2017) was a key step in this direction. This revision established the Personal Information Protection Commission (PPC) responsible for supervising businesses related to personal data and required the agency to review APPI every three years to ensure it adapts to technological and social changes. Additionally, the revised APPI expanded its scope to include overseas entities that process personal data related to subjects in Japan (
Lim and Oh 2025).
In 2019, Japan released the Human-Centered AI Social Principles, which set out three core values (human dignity, diversity and inclusion, and sustainability) together with seven corresponding basic principles that combine safeguard requirements, such as privacy and security, with enabling mechanisms for beneficial AI applications, including education, fair competition, and innovation (
Habuka 2023). The 2020 revision further distinguished between “anonymized” and “pseudonymized” data, aiming to protect privacy while creating legal pathways for extracting value from data; it also introduced stiffer penalties for violations and a mandatory data-breach reporting regime. The most recent revision, in 2023, merged the two separate personal information-protection laws governing administrative agencies and independent administrative corporations into the APPI, achieving a three-in-one consolidation and placing data protection for both the public and private sectors under the PPC’s unified jurisdiction.
The South Korean government enacted the Personal Information Protection Act (PIPA) in 2011; together with the Act on Promotion of Information and Communications Network Utilization and Information Protection (which primarily regulates online service providers’ processing of personal information) and the Credit Information Use and Protection Act (which primarily regulates financial institutions’ collection, use, and protection of credit information), it constitutes South Korea’s data-protection legal framework. The 2015 amendment of PIPA introduced a punitive damages system to address increasingly serious personal information leaks and strengthened sanctions against personal information crimes. A further amendment in 2016 enhanced provisions such as data subjects’ self-determination rights. However, owing to the lack of an independent regulatory body and the limited scope of protection, these two amendments did not enable South Korea to pass the EU’s adequacy assessment (
Lim and Oh 2025). Therefore, South Korea made significant amendments to the aforementioned three laws in 2020, introducing the right to data portability and the right to be forgotten, establishing the Personal Information Protection Commission as an independent regulatory body, and unifying relevant rules for online and offline businesses.
Although South Korea ultimately passed the EU adequacy assessment on the strength of these amendments, the framework remains controversial: compared with the GDPR, PIPA’s protection of data subjects appears insufficient in several respects. For example, scholars like
Kim and Park (
2024) criticize PIPA for not granting individuals a right to refuse fully automated decision-making, providing only a right to object. In December 2024, South Korea passed the Framework Act on AI Development and Building Trust, becoming the second jurisdiction in the world, after the EU, to enact comprehensive AI legislation. The Act focuses mainly on building a governance system, supporting industrial development, and preventing potential risks, and stipulates that a basic plan for enhancing AI competitiveness be formulated and implemented every three years. The South Korean government plans to complete the supporting regulations and guidelines in the first half of 2025 to ensure the law’s smooth implementation in 2026.
Mainland China has accelerated the development of laws on data protection and AI governance in recent years. The Personal Information Protection Law (PIPL) and the Data Security Law, both implemented in 2021, together with the Cybersecurity Law of 2017, constitute the basic legal framework for data governance. PIPL establishes basic principles for personal information processing, including lawfulness, propriety, necessity, and good faith, and grants individuals the rights to know, to decide, to refuse, to query and copy, to correct, to delete, to data portability, and to refuse fully automated decision-making. PIPL is generally regarded as tracking the GDPR closely in its wording, with many provisions corresponding to a considerable degree; the differences between the two stem more from differences in social systems. For example, PIPL does not directly reflect concepts such as personality freedom and informational self-determination, and the specific content of the aforementioned rights and the means of exercising them must be determined in conjunction with the Civil Code, administrative regulations, and other normative documents (
Yao 2022).
The Cyberspace Administration of China (CAC) plays a key role in China’s AI regulation. In 2022, the CAC issued the Provisions on the Administration of Algorithm Recommendation for Internet Information Services (PAAR), which aim to regulate algorithm-recommendation activities, protect users’ rights to know and to choose, and prevent algorithmic discrimination, big-data-enabled price discrimination, and similar problems. Notably, the CAC also requires algorithm filing and security-assessment systems, mainly targeting service providers with public opinion attributes. Subsequently, in 2023, the CAC issued the Interim Measures for the Management of Generative AI Services. In addition to requiring lawful sources of training data, prohibiting intellectual property infringement, and ensuring the authenticity, accuracy, and diversity of content, these measures require service providers to ensure that AI-generated content conforms to socialist core values and does not endanger national security or the public interest. The Chinese model thus has a strongly government-led character.
From the above, it can be seen that data-protection and AI-governance experiences in Northeast Asia exhibit some common characteristics: active alignment with the GDPR, centralized and unified regulatory frameworks, and an active governmental role in governance. At the same time, the three countries differ significantly in their specific governance paths and areas of focus, reflecting their respective political system backgrounds.
4.4. International Organizations’ Initiatives and New Trends in Transnational Governance
Given AI’s globalized character and its universal impact on fundamental rights, coordination and governance beyond national boundaries have become an inevitable trend. Multiple international organizations are actively building consensus, establishing norms, and promoting cooperation in an attempt to provide a common set of ethical guidelines and cooperation frameworks for global AI governance.
The Recommendation on the Ethics of AI, released by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2021, is the world’s first normative document on AI ethics adopted unanimously by all member states. The recommendation emphasizes a human-centered approach, respect for human rights and freedoms, the promotion of environmental and ecosystem sustainability, and the assurance of inclusivity and diversity in developing and using AI systems (
UNESCO 2021). UNESCO further emphasized these values in its AI Competency Frameworks for Students and Teachers, released in 2024, arguing that education systems should equip students not only with AI knowledge and skills but also with an understanding of the technology’s potential impacts on society and the environment (
UNESCO 2025). UNESCO particularly emphasizes that discrimination and bias should be avoided throughout the entire lifecycle of AI systems and that effective remedies should be provided for discriminatory or biased algorithmic decisions.
The AI Principles adopted by the Organization for Economic Co-operation and Development (OECD) in 2019 constitute the first intergovernmental framework on AI, aimed at promoting trustworthy AI that respects human rights and democratic values. In response to challenges such as intellectual property and misinformation in the era of generative AI, the OECD revised these principles in 2024. The updated framework emphasizes vigilance against the misuse of technology, acknowledges the dynamic nature of data governance and the need for cross-jurisdictional interoperability, places greater emphasis on whole-process, full-lifecycle AI governance, and refines the division of responsibilities among stakeholders. The framework is built around five dimensions of trustworthy AI: inclusive growth and sustainable development, respect for human rights and democratic values, transparency and explainability, technical robustness and security, and full-lifecycle accountability (
Russo and Oder 2023). These principles have received broad international recognition, including from the G20.
Specialized international organizations such as the World Intellectual Property Organization (WIPO) and the International Telecommunication Union (ITU) have also proposed policy recommendations and governance solutions for specific AI issues within their respective fields. WIPO recognizes that AI has already challenged traditional concepts of innovation and creation; it has invited member states to promptly inform it of any new policies regarding AI and intellectual property and has convened several dialogues on intellectual property and AI to discuss the new challenges that rapid change in AI poses to intellectual property policy. The ITU focuses on AI standardization and technical specifications to ensure that AI benefits society; recognizing the risks of uncontrolled AI, such as misinformation and bias, it emphasizes transparency and accountability and has developed over 120 AI standards (
Lamanauskas 2025).
Overall, the main challenges facing transnational AI governance include differences among countries in values, regulatory philosophies, technological capabilities, and levels of economic development; tensions between national sovereignty and cross-border data flows; digital divides arising from global inequalities in AI development; and the limited binding force and enforcement capacity of governance frameworks led by international organizations. Nonetheless, in the face of AI’s global impact, strengthening international collaboration has become inevitable.
5. China’s Legal Implementation Paths and Governance Strategies
The preceding sections examined various AI-governance practices. These diverse experiences provide valuable references for improving China’s AI governance. When facing the rights challenges brought by AI, different regions have demonstrated distinct response strategies based on their legal traditions and development needs: the EU emphasizes systematic risk regulation, the US highlights the role of existing laws and industry self-regulation, while Japan and South Korea incorporate international experience with localized adjustments. Despite their different paths, these practices exhibit some common trends, such as the shift from merely protecting personal data to whole-lifecycle governance, from ex-post remedies to ex-ante prevention, and from single-subject responsibility to multi-stakeholder shared responsibility. These international experiences offer beneficial references for China’s construction of an AI-governance system, especially regarding institutional design in key areas such as algorithm impact assessment, human oversight mechanisms, and rights-remedy channels.
While the preceding survey has mapped out a rich spectrum of AI-governance models—from the EU’s rights-based, risk-tiered approach, to the US’s decentralized, market-driven framework, to the Northeast Asian blend of centralized regulation and soft-law initiatives—what proves effective in one jurisdiction cannot be simply transplanted wholesale into another. Legal traditions (common law vs. civil law), institutional architectures (federal vs. unitary; market-led vs. state-led), political priorities (individual rights vs. social harmony), and cultural norms (privacy-as-autonomy vs. privacy-as-social trust) all shape not only the letter of regulatory texts but also how they are enforced and socially accepted. Building on this contextual nuance, this section explores legal implementation paths that can both promote innovative development and effectively protect citizens’ fundamental rights under China’s specific institutional and cultural landscape.
5.1. Structural Challenges Facing China’s AI Governance
China’s rapid development in AI has attracted worldwide attention, with AI technology now permeating every corner of the economy and society. However, as the depth and breadth of these applications expand, the potential risks AI poses to fundamental rights are becoming increasingly prominent in the Chinese context, and owing to China’s specific social and institutional background, these risks may take on even more complex or distinctive dimensions.
Although China has made progress in constructing an AI legal framework, the imbalance between the pace of technological application and governance capacity remains pronounced. The speed of AI adoption has far outpaced the adjustment of relevant legal norms, not only in the three domains discussed in this paper (judicial assistance, predictive policing, and social welfare) but also in many other fields (
Huang et al. 2024). For example, beyond social security departments, when numerous merchants begin automatically analyzing customers in public places with high-precision cameras and facial recognition technology, do public authorities really have the capacity to help ordinary passers-by, whose facial features have already been captured, defend their rights? What exactly are the standards and legal boundaries for the transparency and explainability of algorithm-assisted decisions? Although current laws already enumerate various rights of data subjects, most remain theoretical and lack operational guidance.
Insufficient expertise is another major challenge. AI’s complexity and specialized nature mean that effective regulation requires highly specialized capabilities and tools. For a long time, China has been a technology-development follower focused on developing the technology itself, while research and practice in AI ethics, algorithm auditing, and impact assessment remain relatively underdeveloped. The lack of unified, professional standards, and of regulatory personnel with corresponding expertise, may undermine the effectiveness of AI governance (
Zou and Zhang 2025;
B. Chen and Chen 2024). This lag in regulatory capability has made the information asymmetry between developers, users, and regulatory agencies particularly acute, leaving many AI applications in gray areas and posing potential threats to citizens’ fundamental rights (
Zhu and Lu 2025). Therefore, it is necessary to strengthen the professional capacity of relevant supervisory agencies to ensure that they can effectively assess and monitor the compliance and security of AI systems.
The issue of insufficiently systematized rights-protection mechanisms also deserves attention. In China’s governance regime, effective rights protection is affected by regulatory fragmentation across multiple authorities, with overlapping mandates between agencies and unclear division of central versus local responsibilities (
C. Zhang 2024). In special scenarios such as law enforcement and adjudication, detailed rules on how to balance the exercise of public power against the protection of individual rights are still lacking. The current legal framework focuses more on general data protection, with insufficient attention to scenario-specific rights-protection needs (
B. Chen and Chen 2024). When law enforcement or judicial organs use AI for risk assessment and decision assistance, how do the parties know whether a decision was made by a human or a machine? How are the right to be informed, the right to explanation, and the right to object guaranteed? The specific means of implementing these rights, and the exceptions to them, all require more explicit legal provision.
5.2. Improving the Rights-Protection Legal Framework in the AI Era
Building a comprehensive system for protecting fundamental rights is the foundation for achieving trustworthy AI. China has already established a framework for data rights protection by drawing on GDPR and other international norms. For AI applications in specific high-impact fields, such as law enforcement and justice, further specialized legislation is still needed to address AI-specific risks (
S. Chen 2022). Future legislation on AI and data protection should unfold along three lines: clarifying the substance of rights, regulating technological applications, and establishing protection mechanisms.
Clarifying the substance of rights is the first step in protecting fundamental rights. Legislation should take AI’s characteristics into account and clearly define the specific rights individuals enjoy. For example, the right to transparency enables citizens to know whether they are affected by algorithmic decisions and on what basic principles those decisions rest (
Högberg 2024). Although existing legal norms provide users with some ability to understand algorithmic decision mechanisms, given increasingly complex AI application scenarios, realizing this right still requires more explicit legal provisions as guarantees (
Jian Xu 2024). For instance, although PAAR has been in force since 2022, with Article 16 requiring service providers to disclose the principles of their algorithms in an appropriate manner, it was not until April 2025 that Douyin (the Chinese version of TikTok) finally disclosed the principles of its recommendation algorithm under regulatory pressure, prompting widespread discussion (
Feng 2025). It is worth noting that some service providers that still have not disclosed their algorithmic principles have nonetheless completed the filing procedures with the CAC required by Article 24, which suggests that the existing regulations are weakly enforced.
Furthermore, in domestic Chinese discussions, scholars (e.g.,
Z. Zhang et al. 2025;
Wu 2024) frequently mention the lack of specific implementation mechanisms for rights such as the right to explanation (obtaining reasonable explanations for specific algorithmic decisions), the right to human intervention (requesting human review and intervention in important decisions), and the right to challenge (raising objections to unfair or inaccurate decisions). In recent years, as algorithmic systems have become more complex, scholars like
Malgieri and Pasquale (
2024) have recognized that focusing solely on transparency during operation is insufficient to protect rights effectively; it is also necessary to establish ex-ante justification mechanisms requiring developers to demonstrate that their algorithms meet requirements for safety, non-discrimination, and other criteria before deployment. To provide citizens with clear rights bases and protection paths, the definition of these rights and mechanisms should be established through specialized legislation or amendments to existing laws, and not merely through regulations established by a government department.
Regulating AI requires a cross-industry regulatory system. Current regulation mainly differentiates by application scenario and risk level, especially for sensitive public-sector uses, and specifies stricter administrative review and supervision standards for high-risk applications. However, it is necessary to further clarify scenario limitations and prohibited areas for AI deployment, delineating where AI systems may and may not participate in law enforcement or in decisions affecting civil rights. Likewise, detailed procedural norms for data collection and use are needed to ensure the legality of data sources and the appropriateness of usage scopes, and technical safety standards should be set for algorithmic systems (
Al-Maamari 2025). Currently, China has begun implementing tiered management for certain domains, such as large language models, and launched special campaigns targeting typical algorithm issues. These campaigns typically focus on themes such as rectifying “information cocoons” and “big-data-enabled price discrimination” while examining whether algorithms conform to socialist core values.
However, the regulation of numerous other AI systems that directly affect citizens’ fundamental rights remains unaddressed, a critical gap that urgently needs to be closed. Unlike the EU, China has not introduced unified cross-industry risk classification standards; Chinese regulators tend to regulate by industry according to each sector’s characteristics. Taking again the filing system for algorithms with public opinion attributes as an example, this focus on public sentiment is often itself treated as a form of high-risk identification (
Dorwart et al. 2025). In reality, in the Chinese context, specific industries are prioritized for tiered and classified governance because “high risk” in these industries refers primarily to risks posed to the authorities, rather than, as in the AIA, being centered on impacts on natural persons’ rights. What truly needs more effective regulation now is therefore not the high-risk AI that might interfere with social control, but the algorithmic harms that genuinely affect citizens’ fundamental rights. The rationale for unified risk classification standards and corresponding measures is that, while industries differ in their characteristics, every citizen’s fundamental rights remain the same across them.
Establishing protection mechanisms requires constructing multi-level pathways for realizing rights. China has already established ex-ante regulatory systems in many digital domains that are nominally called “filing” but effectively function as market-entry licenses. These systems are often evaluated positively as enabling the government to track and verify information about systems before, during, and after deployment, thereby increasing the possibilities for transparency and accountability (
J. Zhang 2023). However, the licensing-style filing system established by China differs fundamentally from the ex-ante licensing advocated by scholars such as
Malgieri and Pasquale (
2024). In terms of implementation mechanisms, unlike approaches that emphasize specific rules and objective technical standards for legal validation, China’s current ex-ante regulation leaves significant room for administrative discretion. Regarding transparency, as the Douyin example above shows, although algorithm service providers are obliged to disclose their algorithmic principles and certain details to the authorities, public access to filing information is restricted, and no substantial penalties have been imposed on providers that have long delayed disclosing their algorithmic principles to the public.
In a draft model law prepared by the Chinese Academy of Social Sciences, scholars suggested that China should legislatively confirm “negative lists” for AI research, development, deployment, and other aspects, with projects on these lists requiring prior government approval licenses (
Webster et al. 2023). Considering China’s actual circumstances, in addition to negative lists directed at the market and the public, “power lists” directed at administrative departments should also be established. While it may be unrealistic to require the government to license itself or to presume illegality, rights impact-assessment procedures for AI systems in public procurement should be further improved: law enforcement agencies should be required to conduct comprehensive privacy and fairness reviews before deploying important AI systems, and the assessment results should serve as an important basis for procurement and deployment decisions (
Stahl et al. 2023). Furthermore, it is necessary to improve administrative reconsideration and judicial review mechanisms, clarify standards and methods for court review of algorithmic decisions, and enhance effective judicial supervision of AI. Special compensation mechanisms can also be considered to provide timely relief for serious infringements caused by algorithmic decision errors, ensuring that victims receive basic compensation.
In terms of legislative technique, departmental legislation should first be avoided, to prevent government departments from expanding their own power through AI, over-relying on automated decision-making, and thereby infringing upon citizens’ fundamental rights. Second, although the stability of law is traditionally considered important, given the rapid development of AI, Japan’s and South Korea’s practice of comprehensively reviewing AI policies every two or three years is worth emulating. Finally, the auxiliary role of soft law should be valued: industry associations should be encouraged to formulate self-regulatory norms, and high-tech enterprises should be encouraged to participate in setting standards for responsible AI applications.
5.3. Active Algorithm Impact Assessment and Ethical Review
Algorithm Impact Assessment is a structured risk-assessment procedure that systematically analyzes algorithmic decision systems’ potential impacts on individual and public interests. Like environmental impact assessments, Algorithm Impact Assessment emphasizes comprehensively evaluating possible risks and impacts before system deployment and taking appropriate measures to mitigate negative consequences (
Cheong 2024). Many studies suggest that, compared with advanced international practice, China’s Algorithm Impact Assessment methods and frameworks still need greater standardization and refinement (
X. Zhang 2021). Especially in the law enforcement field, several core dimensions deserve attention. First, privacy impact: assessing the system’s potential risks to personal data privacy, including whether the scope of data collection is too broad, whether processing methods are reasonable, and whether security measures are adequate. Second, fairness impact: analyzing whether the system may produce discriminatory results for different groups and whether risks of algorithmic bias exist, focusing in particular on whether the training data contains historical inequalities and whether these might be amplified and entrenched through algorithmic decisions (
B. Chen and Chen 2024). Third, transparency: examining the explainability of the AI and the comprehensibility of its decision logic, evaluating different stakeholders’ visibility into and understanding of the system’s operating mechanisms, and ensuring that algorithmic decision processes do not become inexplicable black boxes (
Jian Xu 2024). Finally, it is necessary to assess holistically the changes AI brings to traditional power relationships and decision processes, as well as the impact of these changes on fundamental rights such as due process and freedom of expression.
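To make the fairness dimension more concrete, the following minimal sketch, written in Python, illustrates one elementary step an assessor might take: measuring whether a system’s favorable outcomes are distributed evenly across demographic groups. The data, the group labels, and the “four-fifths rule” threshold mentioned in the comments are purely illustrative assumptions for this sketch and are not drawn from any of the regulatory texts or studies cited above.

from collections import defaultdict

def group_positive_rates(records):
    # Share of favorable decisions per group; records are (group, favorable) pairs.
    totals, positives = defaultdict(int), defaultdict(int)
    for group, favorable in records:
        totals[group] += 1
        if favorable:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def fairness_summary(records):
    # Two common descriptive disparity measures over the observed decision rates.
    rates = group_positive_rates(records)
    hi, lo = max(rates.values()), min(rates.values())
    return {
        "positive_rates": rates,
        "demographic_parity_difference": hi - lo,          # 0 means identical rates across groups
        "disparate_impact_ratio": lo / hi if hi else 1.0,  # the "four-fifths rule" heuristic flags values below 0.8
    }

if __name__ == "__main__":
    # Hypothetical audit sample: (group label, whether the system granted a favorable outcome)
    sample = [("A", True)] * 72 + [("A", False)] * 28 + [("B", True)] * 55 + [("B", False)] * 45
    print(fairness_summary(sample))

Such descriptive metrics cannot by themselves establish discrimination, but they can flag disparities that should trigger the deeper review of training data and decision logic described above.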
Ethical review is an important complement to Algorithm Impact Assessment, typically conducted by ethics committees composed of multiple experts who perform ethical reviews of AI research and development projects to identify potential ethical dilemmas and value conflicts, and propose corresponding mitigation measures (
Qiao-Franco and Zhu 2024). In the Chinese context, ethical review will inevitably take into account China’s particular values and social background, such as the strong emphasis on social stability and collectivist principles. China has released a series of AI ethical principles and guidelines; however, many studies point out that these documents may be too vague and fragmented, causing difficulties in interpretation and implementation (
Cao and Meng 2025). At the same time, the intertwining of technological uncertainty, economic-development goals, and global strategic competition also increases the complexity of ethical governance (
Zhu and Lu 2025). Given this, the “participatory ethics” approach proposed by some scholars (e.g.,
Zhang and Pan 2024) seems an option worth taking seriously. Its proponents emphasize that ethical review protocols should be formulated jointly by multiple actors (government, enterprises, academia, the public, and others), following the principles of “co-construction, co-governance, and sharing,” to build a “human-centered” governance framework. Specific recommendations include establishing participatory legislative mechanisms, constructing dynamic ethical assessment systems, strengthening ethics education, introducing regulatory “sandbox” pilots, clarifying the rights and responsibilities of all parties, ensuring the transparency and fairness of technological development, and achieving harmonious human–machine coexistence.
Promoting a comprehensive AI-governance ecosystem requires multi-party cooperation. The government should play a leading role, clarifying legal requirements and providing policy support; academic institutions should strengthen research on assessment methods and standards development; technology enterprises should actively cooperate and incorporate assessment or review results into product improvements; social organizations should also participate in supervision and provide independent assessment perspectives. By integrating the rights-protection mechanisms discussed earlier with these collaborative assessment approaches, it is hoped that a virtuous cycle of mutual promotion between technological ethics and legal regulation can be formed, pushing truly trustworthy AI to take root in China.
6. Conclusions
With its powerful capabilities, AI is profoundly changing the face of social governance, especially showing enormous application potential in law enforcement and judicial fields. By examining AI applications in scenarios such as judicial assistance, technology-enabled law enforcement, and welfare supervision, this study reveals that while this technological empowerment enhances efficiency, it also poses unprecedented systematic challenges to citizens’ fundamental rights. Algorithmic opacity threatens due process, potential bias erodes the principle of equal protection, ubiquitous data collection and analysis impact privacy rights and data self-determination, and automated decision-making may weaken human agency and responsibility. These challenges indicate that we cannot simply view AI as a technologically neutral tool but must recognize its embedded value orientations and its potential influence on power structures.
To view these challenges more objectively, this study further traces the evolution of relevant rights theories, pointing out that the theoretical development from an early emphasis on spatial privacy to the information-age emphasis on data self-determination, although it laid the foundation for responding to technological change, still struggles to fully encompass the new issues AI raises. AI’s autonomous learning ability, complex decision logic, large-scale data dependence, and inferential capability require us to transcend traditional rights-protection paradigms, develop more systematic and forward-looking algorithmic-governance frameworks, and creatively apply principles of procedural justice to algorithmic decision environments, implementing algorithmic due process.
On this basis, this study examines legal practices in major countries and regions. The European Union has constructed a comprehensive, systematic regulatory framework based on fundamental rights and risk classification, striving to set benchmarks for global AI governance and actively practicing the concepts of algorithmic governance and algorithmic due process. The United States presents a decentralized, industry-led model emphasizing the application of existing law and market self-regulation. The three Northeast Asian countries, while actively drawing on international experience, have incorporated their national conditions, demonstrating diverse governance paths and exploring specialized AI legislation. International organizations play important roles in promoting global ethical consensus and cooperation frameworks. These analyses indicate that the global AI-governance landscape is one of diverse coexistence: countries are still exploring how to balance the promotion of innovation against the safeguarding of rights, but they generally recognize the need for specialized, more adaptive regulation of AI.
Finally, this study focuses on China’s specific context, analyzing the special challenges it faces while rapidly developing AI, such as governance lag, insufficient professional capacity, and systematic weakness in rights-protection mechanisms in specific scenarios. Based on the preceding theoretical discussion and international comparison, this paper explores an AI-governance path suited to China’s national conditions. However, AI technology and related legal policies are still developing and changing rapidly, which challenges the timeliness of this research. This paper focuses mainly on legal and theoretical analysis; its treatment of technical details and actual social impacts needs further depth, and the selection and depth of analysis may be limited by the available materials and by length. Additionally, given the wide array of concepts and jurisdictions covered, conducting a thorough and systematic comparative legal study is challenging. Future research could examine the actual effects of specific AI systems deployed in one or more countries and their impact on fundamental rights through more in-depth comparative analysis, or pursue more specific cross-disciplinary research on algorithmic explainability, fairness, and legal applicability, exploring operational assessment standards.
In conclusion, mastering this mighty yet risk-laden technological “steed” of AI, ensuring it progresses steadily on the track of the rule of law, serving social development while not damaging human dignity and rights, is a long-term task requiring continuous wisdom, prudent exploration, and global collaboration. Only through steadfast protection of fundamental rights values and a deepened understanding of technology–society interactions can we properly guide AI development. This requires bold institutional innovation, theoretical advancements, and expansive cross-disciplinary collaboration to ensure AI enhances human well-being while advancing fairness and justice.