1. Introduction
To introduce the paper, it is worth first illustrating the emergence of algorithmic bias through three examples. One of the most significant episodes of this phenomenon dates back almost a decade. As an early development, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which is still used in the US criminal justice system today, was created as an artificial intelligence (AI)-based tool designed to "predict" the likelihood of a defendant's recidivism, to guide judicial decisions, and to provide data-driven criteria for sentencing (
Brennan et al. 2008;
Engel et al. 2024). The system was revolutionary in its own right; the software's developers claimed that COMPAS would produce unbiased, objective results based solely on data-driven information (
Brennan and Dieterich 2017). The reality, however, proved less ideal. Early on, the system was criticized by experts and researchers for its errors and sometimes unpredictable decision patterns, its methodological shortcomings, and its ethical dilemmas (
Brennan et al. 2008;
Singh 2013;
Bouchagiar 2024;
Lippert-Rasmussen 2024). In 2016, however, COMPAS came to the fore in one of the most significant cases in the short but rich history of algorithmic bias. ProPublica, an independent, non-profit organization specializing mainly in investigative journalism, conducted an extensive investigation into the software's risk assessments and claimed that the algorithm showed significant racial bias against African Americans (
Angwin et al. 2016;
Purves and Davis 2023). The organization's team of researchers and journalists examined data on more than 10,000 defendants in Broward County, Florida, specifically focusing on possible racial biases. Two years of COMPAS scores were obtained from the Broward County Sheriff's Office, covering all individuals "scored" (that is, assessed for the likelihood of recidivism using the algorithm's metrics) in 2013 and 2014. At the end of this data-intensive study, ProPublica calculated the overall accuracy of the algorithm, and the report revealed extremely biased results. Non-recidivist African-American defendants were almost twice as likely to be misclassified as high risk as their white counterparts (45% vs. 23%), while recidivist white defendants were more often misclassified as low risk than African-American recidivists (48% vs. 28%). The violent recidivism analysis presented an even more dramatic picture: after accounting for prior offenses, future recidivism, age, and gender, COMPAS was 77% more likely to assign a higher risk score to African-American defendants than to white defendants (
Angwin et al. 2016). Although the ProPublica analysis has been questioned, most notably by
Flores et al. (
2016) for methodological flaws and problematic data handling, research on the racially discriminatory practices of COMPAS and its underlying algorithm continues to be produced to date, largely confirming ProPublica's claims (see
Lagioia et al. 2022;
Purves and Davis 2023).
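To make the kind of group-wise error analysis described above more concrete, the following sketch shows how such misclassification rates can be computed. It is a minimal illustration only, assuming a hypothetical data frame with columns named race, reoffended, and high_risk; it is not ProPublica's actual code, data, or methodology.

```python
# Minimal sketch of a group-wise error-rate comparison of the kind ProPublica
# reported. Hypothetical column names; not ProPublica's code or data.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    """For each group, compute:
    - false_positive_rate: share of non-recidivists labelled high risk
    - false_negative_rate: share of recidivists labelled low risk
    """
    rows = []
    for group, sub in df.groupby(group_col):
        non_recidivists = sub[sub["reoffended"] == 0]
        recidivists = sub[sub["reoffended"] == 1]
        rows.append({
            group_col: group,
            "false_positive_rate": (non_recidivists["high_risk"] == 1).mean(),
            "false_negative_rate": (recidivists["high_risk"] == 0).mean(),
        })
    return pd.DataFrame(rows)

# Usage (hypothetical data frame):
# print(error_rates_by_group(compas_df, group_col="race"))
```

Disparities between the rows of such a table (e.g., 45% vs. 23%) are precisely the kind of gap the 2016 report described.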
The second example, which is also perhaps one of the most famous cases of algorithmic bias to date, is Amazon’s AI-based recruiter application and its short “life”. Dubbed “sexist AI” by the BBC (
BBC News 2018), the software was developed by Amazon from 2014 onwards with the goal of streamlining the recruitment process for prospective employees. Candidates were evaluated through a scoring of their resumes, similar to the review scoring used for products sold on Amazon. The mechanism covered tasks typically performed by human resources staff: it analyzed and sorted CVs, seeking to identify the best candidates for different positions based on the information they contained. Notably, Amazon trained the tool on resumes submitted to the company over the previous decade, using machine learning algorithms to identify patterns and evaluate candidates; the resulting scores ranged from one to five stars. By 2018, it became apparent that the tool systematically discriminated against women, particularly in technical roles, revealing significant flaws in the underlying data and algorithms (
Hofeditz et al. 2022). The root of the bias lay in the training data; previous CVs had come predominantly from male applicants, and the algorithm had learned to associate men with successful, hireable, and suitable candidates (
Hsu 2020).
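The mechanism described above can be illustrated with a deliberately tiny, synthetic sketch: when the historical labels themselves reflect skewed hiring decisions, a standard classifier learns to penalize gender-coded tokens. The resumes, tokens, and labels below are invented for illustration and have no connection to Amazon's actual system or data.

```python
# Synthetic illustration of proxy learning from skewed historical labels.
# Invented resumes and labels; not Amazon's model or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "python java leadership chess_club",   # historically hired
    "cpp python robotics chess_club",      # historically hired
    "java sql leadership mens_rugby",      # historically hired
    "python sql leadership womens_club",   # historically rejected
    "java robotics womens_club",           # historically rejected
    "cpp sql leadership womens_club",      # historically rejected
]
hired = [1, 1, 1, 0, 0, 0]  # labels inherit past, biased hiring decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["womens_club"])  # negative weight: the gendered proxy has been learned
```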
Finally, a recent event that garnered significant media coverage concludes the list of examples. During the current era of the unprecedented democratization of AI (
Costa et al. 2024), in which the proliferation of various large language models (e.g., ChatGPT) has played a significant role, there has been fierce competition in the development of image-generating AI systems, with the most prominent participants in 2024 being DALL-E (now integrated into ChatGPT) and MidJourney. As a new entrant, Google developed a system called Gemini (formerly Bard), a model similar to ChatGPT, with image generation as one of its features (
Imran and Almusharraf 2024;
Saeidnia 2023). However, the introduction of this latter feature proved to be highly controversial. In contrast to the two examples above, where the bias stemmed from the "inherent" social discrimination in the data sets, in the case of Gemini it was the "overcompensation" by data trainers attempting to filter out such problems that gave rise to the bias. As users began to experiment with the tool, Gemini regularly produced historically inaccurate depictions of people and historical figures, rendering them with anachronistic ethnic diversity. Two striking examples were the depiction of an African-American National Socialist soldier and of an American Indian (Amerindian) Viking warrior, both bizarre and plainly incorrect in terms of both representation and historical accuracy (
Telegraph 2024). The bias in the Gemini outputs, as in the previous two cases, was rooted in the data and training choices used to develop the AI, which over-represented certain perspectives and characteristics, resulting in historically inaccurate representations. Although unintended, these representations led to exceedingly offensive results. Google responded promptly, apologized, and rapidly began to weed out the elements that had caused the problem, yet Gemini is still criticized for its ethnic bias to this day (
Da Silveira and Lima 2024).
These three examples reflect the findings of Cathy
O’Neil (
2016, p. 19), one of the most respected researchers on algorithmic bias and among the first authors to argue that new technologies do not eliminate human bias but merely mask it. While data may, by its very nature, appear independent and objective, the individual working with the data is not; hence the bias (see
Chen 2023).
The purpose of the present paper, given the diverse nature of algorithmic bias presented briefly above, is to critically examine it as a complex legal dilemma at the intersection of data governance, digital rights, and technology regulation, with particular attention to AI legislation. Our analysis examines the extent to which existing and proposed legal frameworks, primarily within the European Union and the United States, are equipped to address the systemic, technical, and ethical dimensions of bias in algorithmic decision-making systems. We employ a comparative and doctrinal legal approach to identify normative tensions between rights-based regulation and market-driven innovation. Via this method, we aim to evaluate the structural capacity of legal instruments to contend with opaque AI systems and reflect on the limits of jurisdiction-bound regulatory responses in the face of transnational technological development. Finally, our article argues for a more coherent, globally informed legal strategy that combines enforceable safeguards with technical literacy and institutional accountability.
2. The Concept of Algorithmic Bias
The connection of algorithmic bias to AI may seem self-evident, but it is worth briefly discussing the background. As
Shin and Shin (
2023) have underlined, human cognitive biases are often “built into” algorithms, and AI can amplify them. As the authors put it, in order for AI to adapt to the features and tasks that people prefer, it must learn these preferences, but learning human values carries risks (
Shin and Shin 2023). The relationship between bias and AI is thus inherent from a technological perspective (
Veale et al. 2018). At the heart of the concept of algorithmic bias is systematic/systemic bias. Although the literature on systemicity and bias is rich (
Kordzadeh and Ghasemaghaei 2022;
Johnson 2020), two definitional elements are worth highlighting in the present context. On the one hand, by “systematic”, we do not mean individual cases arising from a single error. To return to the Gemini example, image-generation systems often err or produce inappropriate output, but in the case of Google’s development, it was a recurring problem that was not a one-off glitch but—ab ovo—inherent in the design of the model (
Wang et al. 2024a). The term “systematic”, on the other hand, also means reproducibility (
Chen 2023), which is mainly due to errors in data collection and processing and the lack of appropriate transparency mechanisms (
Dodge et al. 2019). Perhaps the most challenging conceptual element to grasp is the notion of biased outcomes and inequalities as outputs. The reasons for this are manifold. Firstly, biased ‘nature’ and inequality are by no means technological concepts; as
Kordzadeh and Ghasemaghaei’s (
2022) summary study shows, there is a very rich 'techno-philosophical' literature on bias and 'partiality'. If we look at the issue of bias from a philosophical point of view, drawing on the authors' synthesis, bias is understood to cover the following phenomena:
- (1)
an algorithm distributes benefits and burdens unequally between different individuals or groups;
- (2)
this unequal distribution is due to differences in the inherent qualities, talents, or luck of individuals, so that
- (3)
Secondly, framing inequality as a problem essentially implies that there is an ethical standard for algorithm design, based on the assumption that algorithm-based "equality" is achievable if unintentionally built-in biases are filtered out. Some researchers call this concept 'perceived fairness', meaning the impartial, i.e., non-discriminatory, nature of the outputs generated by algorithms, decision-making processes, and algorithm design (
Hooker 2021;
Kirkpatrick 2016). To ensure this,
Zhou et al. (
2022) and
Filippi et al. (
2023) sought to reflect on the conceptual triad of explainability, accountability, and transparency. Their argument includes the claim that the inequalities caused by algorithms go beyond technological issues and the natural biases of algorithm creators, and that measurable, accountable systems should be created in which biases can be detected and inequalities traced back.
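To make the idea of impartial, non-discriminatory outputs more tangible, it is worth recalling how the algorithmic fairness literature commonly formalizes it. The criteria below are standard, illustrative formalizations and are not taken from the works cited above.

```latex
% Illustrative, commonly used fairness criteria for a binary predictor \hat{Y},
% true outcome Y, and protected attribute A (not drawn from the cited works).

% Statistical (demographic) parity: equal positive-decision rates across groups
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b

% Equalized odds: equal error rates across groups, conditional on the true outcome
P(\hat{Y} = 1 \mid Y = y, A = a) = P(\hat{Y} = 1 \mid Y = y, A = b) \quad \text{for } y \in \{0, 1\}
```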
3. The Relationship Between Algorithmic Bias and Regulation: Perspectives from the United States
The legal regulation of algorithmic bias is polemical; dozens of questions are posed to the regulator, but the answers often take years to arrive, and even if a solution is found, the original question may no longer be relevant. In this segment, we discuss the regulatory dilemmas of algorithmic bias, i.e., why addressing algorithmic bias is a legal dilemma in the first place.
Algorithmic bias primarily concerns privacy and AI regulation in general. "Regulation" in this context does not refer to specific legislation related explicitly to bias, but to the relevant rules of certain broader subject areas that may apply to algorithmic bias. It is not an exaggeration to say that the regulatory framework for this phenomenon is characterized by a multiplicity of initiatives at the national, regional, and global levels, not to mention the rules of platforms and AI companies themselves. It is therefore an evolving regulatory environment, which can be considered a sub-segment of AI regulation, characterized by fragmentation and a low level of harmonization (
Wang et al. 2024b). As in the case of other generative AI-related legal issues (cf.
Lendvai and Gosztonyi 2024), two main regulatory trends can be identified in the case of algorithmic bias: the European, highly restrictive but user-centric approach and the American, more liberal, “mixed” approach, which supports both technological and economic development (
Wang et al. 2024b). Given that the European regulatory approach is discussed in the next section, we briefly focus on the US approach in the following segments.
Wang et al. (
2024b) argue that the US legal framework for algorithmic bias is rooted in fundamental civil rights protections and the Fourteenth Amendment, with a strong emphasis on three core principles: equality, non-discrimination, and transparency. The "mixed" regulatory regime mentioned above stems from the fact that algorithmic bias is governed both by relevant statutes, primarily related to hiring, employment, and job placement, such as the Fair Credit Reporting Act (FCRA), and by the guidance of the Equal Employment Opportunity Commission (EEOC), which, among other things, addresses the importance of the impartiality of algorithms (
MacCarthy 2018). In addition, court decisions play a significant role both in interpreting these laws in cases involving algorithmic discrimination and in addressing employment and housing bias caused by increasingly common automated systems (
Wang et al. 2024b). While federal efforts such as the EEOC guidelines and executive orders frame the U.S. approach to algorithmic fairness, state-level regulation also plays an influential role, particularly in the absence of comprehensive federal legislation. For instance, the California Consumer Privacy Act (CCPA) and its successor, the California Privacy Rights Act (CPRA), introduced in 2020, grant consumers significant rights over automated decision making and data profiling, creating a de facto standard given California's market size (
Determann and Tam 2020). Though only partially pertaining to the holistic issues of algorithmic bias, Illinois' Biometric Information Privacy Act (BIPA) also deserves mention, as this piece of legislation has set precedent through litigation against companies using facial recognition or biometric screening in ways that may encode bias. Lastly, novel legislative measures concerning anti-discrimination issues are also noteworthy. For instance, New York City's Local Law 144 mandates audits of AI-driven hiring tools for bias before deployment (
Koshiyama et al. 2022), while Colorado's recently passed law is the first American legislative initiative that holistically aims to tackle algorithmic bias in decision making through oversight and problem-assessment requirements (
CBS News 2024). Nonetheless, despite these sector-specific and state-level laws, there is currently no single federal instrument specifically focused on algorithmic bias in the United States.
Lastly, the October 2023 Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by then-President Joe Biden, deserves specific mention to underline the current state of the issue in the United States. The EO emphasized that US regulation would explicitly and appropriately address algorithmic bias, and it made a number of recommendations in this regard:
- (1)
Clear guidance should be provided to landlords, federal benefit programs, and federal contractors to prevent artificial intelligence algorithms from being used to exacerbate discrimination.
- (2)
Ensure and support cooperation between the Department of Justice and federal civil rights agencies to properly investigate civil rights violations involving AI and specifically algorithmic bias.
- (3)
Ensure algorithmic fairness and equity in the criminal justice system.
- (4)
For points (2) and (3), “best practices” should be developed.
Although the Biden administration appeared ambitious in proposing regulatory incentives for AI in the fall of 2023, following the Blueprint for an AI Bill of Rights proposal a year earlier, the progression of an "American AI Act" has been slow and unfruitful so far. The latter statement is also supported by the fact that re-elected President Donald Trump rescinded the EO on 20 January 2025, within a few hours of assuming office. Moreover, the Trump administration seems to follow a vastly different approach from that of the Biden administration. As seen from the EO of 23 January 2025, the new administration aims at "removing barriers to American leadership in artificial intelligence" (
White House 2025). Trump’s EO outlines a policy to enhance American competitiveness, human well-being, and national security through responsible AI innovation. The order also mandates the creation of an AI Action Plan within 180 days, coordinated by top presidential advisors and relevant agencies—at the time of the drafting of the present paper, this has not occurred. Finally, it directs agencies to revise or rescind conflicting policies while clarifying that the order does not create enforceable legal rights (
White House 2025).
4. The European Approach—Shooting at Too Many Targets Without Hitting One?
Before outlining the European framework, it is essential to explain why the differences in regional legislation are so stark. The divergent regulatory approaches to algorithmic bias stem from foundational differences in legal philosophy, governance structures, and economic priorities. As opposed to the American approach, the EU has historically emphasized precautionary principles and user-centric rights, reflecting its broader commitment to data protection, human dignity, and social welfare (see
Walter 2024). This is evident in the GDPR (
Viterbo 2019) and the AIA, which prioritize transparency, fairness, and the mitigation of systemic risk. However, one critical issue remains largely underdiscussed: the political polarization and lobbying by powerful tech companies, which have also slowed the adoption of laws. It is important to mention, in this context, that the European framework is based on a vastly different AI environment. Unlike other "big players" in AI development, Europe lags behind in developing leading AI technologies, forcing the region into a so-called "AI tango" in which competitiveness and legislation are constantly balanced (
Todorova et al. 2023). This dilemma translates into a foundational question: should Europe remove constraints and give up its rigorous policymaking approach, or should it lead in legislation but sacrifice a potential leading role in development?
In the context of algorithmic bias, especially in light of recent developments in the United States, the European legal framework clearly offers the most forward-looking regulatory approach, even if concerns have been expressed that EU law does not address all potential risks (see
Hacker 2018). EU legislation approaches the issue from several directions, and the following section sets out the instruments applicable in specific cases.
First, we present the data protection issues, connecting them to Europe's "flagship" data regulation instrument, the General Data Protection Regulation (hereinafter: "GDPR"). Article 22 of the GDPR is pivotal in the context of algorithmic bias (
Sancho 2020). According to Article 22(1), the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. The basis for this provision is the objective, also set out in the preamble, that the processing of personal data should be for the benefit of individuals (GDPR Recital 4) and that fair and, above all, transparent processing should be ensured (see GDPR Recital 60). The primary concern with the provision, particularly in the context of algorithmic bias, is its legal implications and the definition of other types of "similarly significant" decisions, which the GDPR only briefly addresses in the preamble (GDPR Recital 71). By legal effect, we primarily mean decisions or actions that affect someone's rights, legal status, or even contractual rights. According to
Barbosa and Félix (
2021), examples include effects on voting rights, entitlement to a monthly pension on the grounds of disability, or the ability to enter a country. Interpreting what counts as a decision with a similarly significant impact is a degree more complex (
Thouvenin et al. 2022). Although Recital 71 of the GDPR provides some support, beyond illustrative cases, it is not clear from the text of the GDPR what the precise scope of this type of decision is (cf.
Thouvenin et al. 2022).
Barbosa and Félix (
2021) suggest in this regard that a decision should be assessed primarily by its weight rather than its automated nature; that is, while there is no doubt that a streaming platform recommends a track in an automated way, this presumably does not have a significant or even comparable legal effect. It is also important to note, however, that there are exceptions to Article 22(1). In cases where (A) processing is necessary for the conclusion or performance of a contract, (B) EU or Member State law allows it, or (C) the data subject has given his or her explicit consent, it is not necessary to apply the above provision (
Thouvenin et al. 2022;
Malgieri 2019). However, in cases (A) and (C), the GDPR provides an important safeguard, as the controller shall take appropriate measures to protect the rights, freedoms, and legitimate interests of the data subject, including at least the right to obtain human intervention by the controller, to express his or her point of view, and to contest the decision (GDPR Article 22(3)). As
Bygrave (
2020) indicates, these options, while essentially ex post, are extremely forward-looking in the area of user and data protection and provide significant safeguards against potentially harmful algorithmic bias in such decisions.
Another key regulation by the EU is the Digital Services Act (hereinafter: “DSA”). Enacted in late 2022, the DSA introduces ground-breaking rules in the area of platform regulation (
Lendvai 2024), and its provisions can also be partially applied to algorithmic bias. The DSA introduces a new regulatory "philosophy" whereby rules are differentiated based on the category and size of different platforms (
Laux et al. 2021;
Lendvai 2024;
Turillazzi et al. 2023). The issue of algorithms is highlighted in two segments in the DSA. Firstly, it is mentioned in Article 14, which concerns the contractual aspects of the terms and conditions of online platforms (
Quintais et al. 2023) and specifically states that the information service providers give on algorithmic decision making must be "clear, simple, understandable, user-friendly" and easily accessible to the recipient of the service. There is a greater focus on algorithms and their transparency in the case of very large online platforms and very large online search engines (abbreviated together as VLOP/VLOSE). These are platforms and search engines that have an average of at least 45 million active users per month in the EU per Article 33(1) of the DSA. It should be noted here, however, that a further condition for VLOP/VLOSE status is that the European Commission must also designate the platform as such. The designation procedure is not a one-off exercise; the Commission is constantly monitoring and expanding the list of VLOP/VLOSEs to maintain the most complete picture possible of who the "giants" among platforms are. VLOP/VLOSEs are subject to several new, very progressive, yet rigorous rules; however, the most significant new stipulation is the obligation to assess risk in the area of algorithms. Indeed, Article 34 of the DSA requires VLOP/VLOSEs to carefully identify, analyze, and evaluate the systemic risks arising from the design or operation of their services and related systems, including algorithmic systems, or from the use of their services. Such systemic risks include, inter alia, the dissemination of illegal content; negative effects on fundamental rights; threats to democratic discourse and public safety; and negative consequences for the physical and mental well-being of persons, concerning gender-based violence, public health, and the protection of minors (
Lendvai 2024;
Husovec 2024). Algorithmic bias can manifest itself in all of these systemic risks, such as the systematic or sometimes “invisible” deletion of certain opinions (also known as “shadowbanning”,
Jones (
2023)), disinformation campaigns, the proliferation of online political propaganda and its exploitation by algorithms, or the exclusion of certain demographics from health information (see
Ratwani et al. 2024). If a VLOP/VLOSE identifies such bias, it should first assess and then mitigate the resulting risks in accordance with Articles 34–35 of the DSA. A notable example of the latter is the need for platforms to test and verify the algorithmic systems they use in accordance with Article 35(1)(d) of the DSA. Another provision of the DSA (Article 40), which is particularly beneficial to researchers, stipulates that VLOP/VLOSEs should also, albeit in a narrow context, describe the design, logic, operation, and testing of their algorithmic systems, including their recommender systems (
Liesenfeld 2024).
The most significant chapter of EU regulation concerning algorithmic governance is, without a doubt, the Artificial Intelligence Act (AI Act, “AIA”), which was finally adopted in the summer of 2024, following significant anticipation. Although the term “algorithm” appears relatively infrequently in the AIA, several articles are applicable to the phenomenon of algorithmic bias. Bias first appears in Recital 27 of the AIA, which emphasizes that algorithmic bias in AI systems must be addressed through the promotion of diversity, fairness, and non-discrimination. The recital also refers to the ethical guidelines for trustworthy AI published on 8 April 2019 by the EU’s High-Level Expert Group on AI, which advocates for inclusive development and the avoidance of unfair or discriminatory impacts. Unlike the AIA itself, the guidelines define the concept of bias as tendencies or prejudices that (1) may influence outputs and (2) originate from various sources such as data collection, rule design, user interaction, or limited application contexts. The guidelines also accentuate that while bias can be intentional or unintentional and, in some cases, even beneficial, algorithmic bias often leads to discriminatory outcomes. This is what the guidelines refer to as “unfair bias.” To prevent unfair bias, the guidelines highlight the importance of addressing the root causes of the problem, for example, identifying and remedying inherent prejudices in data and discriminatory algorithmic design. While the guidelines are abstract, they classify measures such as transparent oversight and promoting diversity in development teams as key mitigation strategies.
Recital 48 of the AIA emphasizes that AI systems may have potentially harmful effects on fundamental rights, including non-discrimination, equality, and fairness. Although a detailed presentation of the AIA’s risk-based structure exceeds the scope of this paper, it should be emphasized that (similar to the DSA) a differentiated system is applied to AI systems. The regulation categorizes AI systems based on their level of risk, with distinct rules applying to prohibited, high-risk, and general-purpose systems that pose systemic risks (
Golpayegani et al. 2023). Although the four-tier risk classification structure proposed in earlier drafts changed substantially in the final text, the role and significance of algorithms remain consistent with prior versions (cf.
Novelli et al. 2023). Within this context, prohibited AI systems include those that severely interfere with or distort human decision making, exploit users' vulnerabilities, classify individuals based on certain criteria (this passage clearly refers to the Chinese social credit system and the related facial recognition infrastructure, see
Mac Síthigh and Siems 2019), predict criminal behavior based solely on profiling, build facial recognition databases through untargeted scraping of images, infer emotions, apply biometric categorization, or use real-time biometric identification in publicly accessible spaces for law enforcement purposes. Regarding biometric identification, Recital 32 explicitly addresses the issue of bias, stating that biased outcomes and discriminatory impacts are particularly relevant concerning age, ethnicity, race, gender, and disability, which is why such practices are broadly prohibited.
For high-risk AI systems, the AIA—similar to the DSA—introduces a risk assessment framework (
Kusche 2024). This framework mandates the ongoing identification, analysis, and mitigation of risks, including algorithmic bias. Although the term "algorithm" is not explicitly mentioned in these provisions, Article 13 of the AIA, which addresses the transparency of high-risk systems, can largely be interpreted as setting rules for algorithm transparency. Article 13 requires clear and comprehensive documentation explaining the capabilities and limitations of the AI system, including the possibility that it may produce biased results. Notably, Recital 67 discusses in detail the datasets used for training, validation, and testing. Here, the concern with algorithmic bias arises when datasets contain statistical information about individuals or groups, potentially leading to the exclusion of vulnerable groups. To mitigate such risks, these datasets must reflect the specific geographic, contextual, behavioral, or functional environments in which the AI system is intended to be used. Article 10 reinforces this by stating that all potential risks posed by high-risk AI systems that use such datasets must be investigated—especially those that may affect fundamental rights, lead to discrimination, or have any impact (whether negative, neutral, or positive) on individuals' health and safety. Investigations are critically important when system outputs affect future inputs, for example, in predictive policing scenarios where prior data influences deployment recommendations.
In addition to investigation, preventive and mitigation measures must also be implemented for such AI systems. Furthermore, Article 10(5) emphasizes that high-risk AI providers may only process sensitive categories of personal data if this is strictly necessary for identifying or correcting bias, and if the detection of bias cannot be achieved through other means (
Van Bekkum 2025). Such data processing must comply with strict safeguards, including pseudonymization, limited access, security checks, prohibition of sharing with third parties, and timely deletion. Detailed documentation must also justify the necessity of using sensitive data and demonstrate compliance with EU data protection laws. Lastly, Article 14 outlines requirements for effective human oversight of high-risk systems to minimize risks to health, safety, and fundamental rights. In the context of algorithmic bias, Article 14(4)(b) is particularly important as it addresses automation bias. The regulation mandates that oversight mechanisms must help users understand these risks and ensure that individuals are capable of questioning or overriding AI decisions. Article 15 also provides guidance on system design and bias mitigation, stating that high-risk AI systems must be designed to minimize biased feedback loops, for instance, in systems that continue to learn post-deployment. This includes implementing technical and organizational measures to prevent biased outputs from influencing future inputs and ensuring active mitigation of such risks.
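The feedback-loop risk addressed by Article 15 can be illustrated with a minimal simulation sketch. The two-district predictive policing setting below is entirely hypothetical and is not drawn from the AIA or any real deployment; it merely shows how an allocation rule that follows its own past records lets a small initial skew snowball even when the underlying rates are identical.

```python
# Hypothetical sketch of a biased feedback loop: patrols follow past records,
# and crime is only recorded where patrols are sent, so an initial skew grows.
import random

random.seed(0)
TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}  # identical by design
recorded = {"district_a": 6, "district_b": 5}               # small historical skew

for day in range(1000):
    # the "model": patrol the district with the most recorded incidents so far
    target = max(recorded, key=recorded.get)
    # detection (and hence future training data) only happens where the patrol is
    if random.random() < TRUE_CRIME_RATE[target]:
        recorded[target] += 1

print(recorded)  # only district_a accumulates new records despite equal true rates
```

Mitigation measures of the kind Article 15 envisages would need to break this loop, for instance by discounting records that were generated by the system's own allocation decisions.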
Finally, it is also critical to briefly mention the longstanding tradition of European anti-discrimination laws. Unarguably, the GDPR, DSA, and AIA represent the core of the EU’s digital regulatory architecture, which is also built upon a longstanding legal foundation where anti-discrimination serves as a core principle. For instance, directives such as the Racial Equality Directive (2000/43/EC of 29 June 2000), the Employment Equality Directive (2000/78/EC of 27 November 2000), or the Equal Treatment Directive (2006/54/EC of 5 July 2006) have all established robust principles of equal treatment across domains including employment, education, and access to goods and services (
Ellis and Watson 2012). These instruments have been interpreted by the Court of Justice of the European Union (CJEU) to reinforce substantive equality, including in contexts involving indirect or systemic discrimination (see
Frese 2021). Moreover, all EU member states are bound by the European Convention on Human Rights, which, in Article 14, prohibits discrimination on a broad range of grounds, including sex, race, language, religion, political or other opinion, and national or social origin. Though these legal instruments predate the emergence of algorithmic governance, they remain directly relevant today; algorithmic bias often results in precisely the kind of disparate impact that EU law has long sought to prevent.
5. Algorithmic Bias, the Modern Lernaean Hydra?—Dilemmas and Future Regulatory Issues
To use a mythological analogy, the regulation of AI, especially algorithms, is like Heracles’ battle with the Lernaean Hydra, meaning that for every regulatory solution, new and more complex challenges emerge. Among the most persistent is the so-called “black box” effect, an epistemic and technical opacity that defines many AI systems, particularly those based on deep learning architectures (
Savage 2022). These models derive patterns from vast datasets, utilizing millions of parameters and layers, which results in outputs whose internal logic is inaccessible, even to their developers (
Brożek et al. 2023). Unlike rule-based systems, where decisions can be retraced, machine learning systems often cannot provide a clear causal link between input data and resulting outcomes. This severely hampers the possibility of comprehensive legal oversight. From the regulator's standpoint, the lack of interpretability essentially means that identifying discriminatory patterns or challenging flawed reasoning becomes nearly impossible. Though the AIA indeed mandates a comprehensive set of stipulations promoting transparency for high-risk AI systems, these obligations focus more on procedural compliance than substantive interpretability. This approach gives rise to a practical problem: no current EU or national regulation mandates that algorithmic processes be explainable in a manner that a layperson, or sometimes even an auditor, can consistently understand. Furthermore, circling back to the aforementioned underlying issues, trade secret protections are often invoked to avoid disclosing internal model logic, creating a paradox where systems may be lawful on paper but illegible in practice (see
Foss-Solbrekk 2023). This paradox fosters an environment in which algorithmic bias can persist unchecked, as even diligent regulators lack the tools to audit or remedy systemic inequalities effectively. The result is a peculiar legal situation: without independent technical audits, enforceable standards of explainability, and mechanisms to challenge opacity claims, the EU's stated goals of fairness, accountability, and non-discrimination remain aspirational.
Another obstacle could be enforcement. The AIA, in particular, lays down enforcement rules. Nonetheless, it is questionable whether adequate resources and, most importantly, expertise can be devoted to ensuring proper enforcement. In this respect, it is of grave concern that the European AI Office, which was established by the AIA and which aims to contribute to the implementation, monitoring, and supervision of AI systems, general-purpose AI systems, and AI governance, has been able to deliver mainly administrative results over the past year, while the EU's AI Board, which supports the work of the Office, had met only twice, without any substantive results, at the time of the writing of this study (
EU 2024). The lack of resources and slow bureaucratic processes are even more pronounced at the national level, as most Member States do not yet have a dedicated authority for AI and its monitoring. In this regard, we argue that closing the gap between legal regulation and technical implementation necessitates a deeper engagement with existing methods for mitigating algorithmic bias. In practice, bias can be addressed at three main stages: pre-processing (PRP), in-processing (IP), and post-processing (POP) (
Kim and Cho 2022;
González-Sendino et al. 2024). Methods include well-developed techniques such as fair representation in the PRP phase, which aims to eliminate embedded biases in datasets while preserving the data's critical structural and semantic properties; incorporating fairness constraints in the IP stage; or POP threshold-adjusting mechanisms, which allow outcomes to be balanced across different groups (
Kim and Cho 2022). These techniques are essential complements to legal instruments such as the AI Act. Moreover, without technical audits, it is nearly impossible for regulators to evaluate whether a system complies with fairness mandates. Therefore, we suggest that legal responses to algorithmic bias will remain superficial or, worse, symbolic, unless they are accompanied by robust, context-sensitive mitigation strategies developed and validated by computer scientists, ethicists, and domain experts.
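As a concrete illustration of the post-processing (POP) threshold-adjusting idea mentioned above, the sketch below picks a separate score cutoff for each group so that acceptance rates are equalized. The group names, scores, and target rate are hypothetical, and this is only one of several POP techniques discussed in the cited literature.

```python
# Minimal sketch of post-processing (POP) threshold adjustment: choose a
# per-group cutoff on model scores so each group's acceptance rate matches
# a common target. Hypothetical groups and scores.
from typing import Dict, List

def group_thresholds(scores: Dict[str, List[float]], target_rate: float) -> Dict[str, float]:
    """Return, for each group, the lowest score still accepted when the
    top `target_rate` share of that group is accepted."""
    thresholds = {}
    for group, group_scores in scores.items():
        ranked = sorted(group_scores, reverse=True)
        k = max(1, round(target_rate * len(ranked)))  # number of accepted candidates
        thresholds[group] = ranked[k - 1]
    return thresholds

scores = {
    "group_a": [0.92, 0.81, 0.77, 0.64, 0.55, 0.40, 0.31, 0.22],
    "group_b": [0.70, 0.66, 0.58, 0.49, 0.45, 0.38, 0.27, 0.15],
}
print(group_thresholds(scores, target_rate=0.25))
# {'group_a': 0.81, 'group_b': 0.66}: different cutoffs, equal acceptance rates
```

The design choice here is deliberate: equalizing acceptance rates after the fact leaves the underlying model untouched, which is precisely why regulators would still need documentation and audits to judge whether such an adjustment is appropriate in a given context.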
Perhaps an even bigger problem, and one that goes beyond legal regulation, is the lack of harmonization in the regulation of algorithms. As the UN Secretary General’s Expert Panel on AI indicated in autumn 2024, there is an “irrefutable” need for global regulation, and market forces must not be allowed to dictate AI development and regulatory boundaries (
UN 2024). The report also raises concerns that, without proper global governance, the benefits of AI may be limited to a select few nations and organizations, which could exacerbate the already existing digital divide and inequalities (
UN 2024).
Lastly, it is fundamental to question whether there could be an "ideal" framework. Though we cannot answer this query with absolute certainty, we claim that an optimal legislative framework to regulate algorithmic bias would integrate the European Union's strong emphasis on fundamental rights and transparency with the United States' flexibility and innovation-driven pragmatism. By this, we mean that from the EU model, the framework should retain robust ex ante safeguards (such as risk classification, mandatory documentation, and user rights to explanation), ensuring that systems are not only compliant but also accountable by design. From the U.S. approach, on the other hand, the framework should adopt context-sensitive standards that allow for technological experimentation while ensuring that outcomes do not disproportionately harm protected groups. Furthermore, such legislation must also provide precise enforcement mechanisms, including independent audits and meaningful sanctions, coupled with incentives for private-sector innovation in fairness-enhancing technologies. Most importantly, however, the ideal regulation would be adaptive, capable of responding to emerging AI applications without becoming obsolete or overly rigid.
From a more theoretical perspective, the ideal framework would also need to define equality more precisely. This question emerges from the fact that while legal frameworks often—innately—assume that fairness can be achieved through neutral, data-driven processing, this assumption collapses when viewed against entrenched social and economic inequalities. A striking example is the widely cited case of discriminatory healthcare algorithms in the United States that predicted the need for medical intervention based on past healthcare expenditures, often favoring one group over other, more marginalized groups (
Obermeyer et al. 2019). Furthermore, a recent precedent must also be cited, as the contested nature of equality in algorithmic design is mirrored in legal developments. A particularly illustrative case is the U.S. Supreme Court's 2023 decision in Students for Fair Admissions v. Harvard and its companion case against the University of North Carolina (UNC), in which the Court ruled that race-conscious admissions policies violated the Equal Protection Clause of the Fourteenth Amendment. In the respective 6–2 (v. Harvard) and 6–3 (v. UNC) decisions, the majority held that such policies lacked clear, measurable objectives and imposed unjustified burdens on certain applicants, particularly Asian Americans. Although the case concerned higher education admission procedures rather than automated systems, the decisions highlight a growing judicial preference for formal, colorblind interpretations of equality over substantive, redistributive approaches. This judicial trend also has direct implications for algorithmic fairness. Given that legal norms increasingly prioritize neutrality over equity, developers and regulators may find themselves constrained in implementing algorithmic models that deliberately account for socioeconomic inequality, resulting in further questions about the scope and legitimacy of fairness interventions in automated systems. This dilemma leads to a deeper issue, which appears to be embedded in the question of whether algorithmic equality or equity is an "objective" criterion or is, in fact, a proxy for structural privilege. Thus, algorithmic equality may not be a mere empirical problem to be corrected with better data but rather a normative construct that requires deliberate intervention. Accordingly, regulatory frameworks must go beyond requiring explainability or transparency and grapple with the ethical decisions inherent in variable selection, weight assignment, and outcome optimization.