Article

Algorithmic Bias as a Core Legal Dilemma in the Age of Artificial Intelligence: Conceptual Basis and the Current State of Regulation

by
Gergely Ferenc Lendvai
1 and
Gergely Gosztonyi
2,*
1
Faculty of Public Governance and International Studies, Ludovika University of Public Service, 1083 Budapest, Hungary
2
Digital Authoritarianism Research Lab (DARL), Faculty of Law, Eötvös Loránd University (ELTE), 1053 Budapest, Hungary
*
Author to whom correspondence should be addressed.
Laws 2025, 14(3), 41; https://doi.org/10.3390/laws14030041
Submission received: 11 April 2025 / Revised: 25 May 2025 / Accepted: 3 June 2025 / Published: 12 June 2025

Abstract

This article examines algorithmic bias as a pressing legal challenge, situating the issue within the broader context of artificial intelligence (AI) governance. We employed comparative legal analysis and reviewed pertinent regulatory documents to examine how the fragmented U.S. approaches and the EU’s user-centric legal frameworks, such as the GDPR, DSA, and AI Act, address the systemic risks posed by biased algorithms. The findings underscore persistent enforcement gaps, particularly concerning opaque black-box algorithmic design, which hampers bias detection and remediation. The paper highlights how current regulatory efforts disproportionately affect marginalized communities and fail to provide effective protection across jurisdictions. It also identifies structural imbalances in legal instruments, particularly in relation to risk classification, transparency, and fairness standards. Notably, emerging regulations often lack the technical and ethical capacity for implementation. We argue that global cooperation is not only necessary but inevitable, as regional solutions alone are insufficient to govern transnational AI systems. Without harmonized international standards, algorithmic bias will continue to reproduce existing inequalities under the guise of objectivity. The article advocates for inclusive, cross-sectoral collaboration among governments, developers, and civil society to ensure the responsible development of AI and uphold fundamental rights.

1. Introduction

As an introduction to the paper, it is essential first to illustrate the emergence of algorithmic bias through three examples. One of the most significant moments in this phenomenon dates back almost ten years. As an early development, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which is still used in the US criminal justice system today, was created as an artificial intelligence (AI)-based tool designed to “predict” the likelihood of a defendant’s recidivism, to guide judicial decisions, and to provide data-driven criteria for sentencing (Brennan et al. 2008; Engel et al. 2024). The system was revolutionary in its own right; the software’s developers claimed that COMPAS would show results in an unbiased and objective manner, based solely on data-driven information (Brennan and Dieterich 2017). The reality, however, was less idealistic. Early on, the system was criticized by experts and researchers for its errors and sometimes unpredictable decision patterns, its methodological shortcomings, and its ethical dilemmas (Brennan et al. 2008; Singh 2013; Bouchagiar 2024; Lippert-Rasmussen 2024). In 2016, however, COMPAS came to the fore in one of the most significant cases in the short but rich history of algorithmic bias. ProPublica, an independent, non-profit organization specializing mainly in investigative journalism, conducted an extensive investigation into the software’s proposed decisions and claimed that the algorithm showed significant racial bias against African Americans (Angwin et al. 2016; Purves and Davis 2023). The organization’s team of researchers and journalists examined data on more than 10,000 defendants in Broward County, Florida, specifically focusing on possible racial biases. Two years of COMPAS scores were obtained from the Broward County Sheriff’s Office, covering all individuals “scored” (that is, evaluated for the likelihood of recidivism based on specific algorithm-driven metrics) in 2013 and 2014. At the end of this data-intensive study, ProPublica calculated the overall accuracy of the algorithm and reported extremely biased results. Non-recidivist African-American defendants were almost twice as likely to be misclassified as high risk as their white counterparts (45% vs. 23%), while recidivist white defendants were more often misclassified as low risk than African-American recidivists (48% vs. 28%). The violent recidivism analysis presented an even more dramatic picture. After accounting for prior offenses, future recidivism, age, and gender, COMPAS was 77% more likely to assign a higher risk score to African-American defendants than to white defendants (Angwin et al. 2016). Although the ProPublica analysis has been questioned, notably by Flores et al. (2016), for methodological flaws and flawed data, subsequent research on the racially discriminatory practices of COMPAS and its underlying algorithm has largely confirmed ProPublica’s claims (see Lagioia et al. 2022; Purves and Davis 2023).
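In technical terms, the disparities ProPublica documented are differences in group-wise error rates, namely false positive and false negative rates. The following Python sketch is purely illustrative: the cohort is synthetic and merely shaped to echo the 45% vs. 23% and 48% vs. 28% gaps cited above, not drawn from the Broward County data.

```python
# Illustrative only: a synthetic cohort shaped to echo the disparities
# reported by ProPublica, not the actual Broward County dataset.
from collections import Counter

def error_rates(records):
    """Compute false positive and false negative rates per group.

    records: list of (group, predicted_high_risk, reoffended) tuples.
    """
    counts = Counter(records)
    rates = {}
    for group in {g for g, _, _ in records}:
        fp = counts[(group, True, False)]   # labelled high risk, did not reoffend
        tn = counts[(group, False, False)]
        fn = counts[(group, False, True)]   # labelled low risk, did reoffend
        tp = counts[(group, True, True)]
        rates[group] = {
            "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
            "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
        }
    return rates

# Hypothetical cohort of 200 defendants per group.
synthetic = (
    [("A", True, False)] * 45 + [("A", False, False)] * 55 +
    [("A", False, True)] * 28 + [("A", True, True)] * 72 +
    [("B", True, False)] * 23 + [("B", False, False)] * 77 +
    [("B", False, True)] * 48 + [("B", True, True)] * 52
)
print(error_rates(synthetic))
# Group A: FPR 0.45, FNR 0.28; Group B: FPR 0.23, FNR 0.48
```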
The second example, which is perhaps also one of the most famous cases of algorithmic bias to date, is Amazon’s AI-based recruiting application and its short “life”. In 2014, Amazon began developing the software, later dubbed “sexist AI” by the BBC (BBC News 2018), with the goal of streamlining the recruitment process for prospective employees. The evaluation of prospective workers was based on a scoring system applied to their resumes, similar to the review scoring used for products sold on Amazon. The mechanism itself typically involved tasks that covered the job roles of human resources staff: it analyzed and sorted CVs, seeking to identify the best candidates for different positions based on the information in the CV. Notably, Amazon trained the tool on resumes submitted to the company over the preceding decade, using machine learning algorithms to identify patterns and evaluate candidates. The program introduced a specific scoring system ranging from one to five stars. By 2018, it became apparent that the tool systematically discriminated against women, particularly in technical roles, revealing significant flaws in the underlying data and algorithms (Hofeditz et al. 2022). The root of the bias lay in the training data; previous CVs were predominantly from male applicants, and the algorithm had learned to associate men with successful, hired, and suitable candidates (Hsu 2020).
Finally, a recent event that has garnered significant media coverage is worth concluding the list of examples. In the current era of the unprecedented democratization of AI (Costa et al. 2024), in which the proliferation of various large language models (e.g., ChatGPT) has played a significant role, there has been fierce competition in the development of image-generating AI systems, with the most prominent participants in 2024 being DALL-E (now integrated into ChatGPT) and MidJourney. As a new entrant, Google developed a system called Gemini (formerly Bard), a model similar to ChatGPT, with image generation as one of its features (Imran and Almusharraf 2024; Saeidnia 2023). However, the introduction of this latter feature proved to be highly controversial. In contrast to the two examples above, where the bias was based on the “inherent” social discrimination in the data sets, in the case of Gemini, it was the “overcompensation” by data trainers seeking to filter out such problems that formed the basis of the bias phenomenon. As users began to experiment with the tool, Gemini regularly produced historically inaccurate, ethnically diversified depictions of people and historical figures. Two striking examples were the depiction of an African-American National Socialist soldier and of an American Indian (Amerindian) Viking warrior, both bizarre and absurdly incorrect in terms of both representation and historical accuracy (Telegraph 2024). The bias in the Gemini outputs, as in the previous two cases, is rooted in the data used to train the AI, which over-represented certain perspectives and characteristics, resulting in historically inaccurate representations. Although unintended, these representations led to exceedingly offensive results. Google responded promptly, apologized, and rapidly began to weed out the elements that had caused the problem, but Gemini is still criticized for its ethnic bias to this day (Da Silveira and Lima 2024).
These three examples reflect the findings of Cathy O’Neil (2016, p. 19), one of the most respected researchers on algorithmic bias, who was one of the first authors to argue that new technologies are not eliminating human bias but merely masking it. While data may undoubtedly, by its very nature, appear to be independent and objective, the individual working with the data is not—hence the bias (see Chen 2023).
The purpose of the present paper, given the diverse nature of algorithmic bias presented briefly above, is to critically examine it as a complex legal dilemma at the intersection of data governance, digital rights, and technology regulation, with particular attention to AI legislation. Our analysis examines the extent to which existing and proposed legal frameworks, primarily within the European Union and the United States, are equipped to address the systemic, technical, and ethical dimensions of bias in algorithmic decision-making systems. We employ a comparative and doctrinal legal approach to identify normative tensions between rights-based regulation and market-driven innovation. Via this method, we aim to evaluate the structural capacity of legal instruments to contend with opaque AI systems and reflect on the limits of jurisdiction-bound regulatory responses in the face of transnational technological development. Finally, our article argues for a more coherent, globally informed legal strategy that combines enforceable safeguards with technical literacy and institutional accountability.

2. The Concept of Algorithmic Bias

As in the case of most digital phenomena, algorithmic bias is also problematic when it comes to formulating a uniform concept (Kim and Cho 2022). Generally speaking, algorithmic bias refers to systematic and structured errors and bias points in AI systems or AI-based systems that produce biased results and inequalities without any justifiable reason (Robert et al. 2020; Kordzadeh and Ghasemaghaei 2022; Fazelpour and Danks 2021; Johnson 2020; Hooker 2021). We propose to break the concept down into its elements.
The connection of algorithmic bias to AI may seem self-evident, but it is worth briefly discussing the background. As Shin and Shin (2023) have underlined, human cognitive biases are often “built into” algorithms, and AI can amplify them. As the authors put it, in order for AI to adapt to the features and tasks that people prefer, it must learn these preferences, but learning human values carries risks (Shin and Shin 2023). The relationship between bias and AI is thus inherent from a technological perspective (Veale et al. 2018). At the heart of the concept of algorithmic bias is systematic/systemic bias. Although the literature on systemicity and bias is rich (Kordzadeh and Ghasemaghaei 2022; Johnson 2020), two definitional elements are worth highlighting in the present context. On the one hand, by “systematic”, we do not mean individual cases arising from a single error. To return to the Gemini example, image-generation systems often err or produce inappropriate output, but in the case of Google’s development, the problem was recurring, not a one-off glitch, and was—ab ovo—inherent in the design of the model (Wang et al. 2024a). On the other hand, “systematic” also means reproducibility (Chen 2023), which is mainly due to errors in data collection and processing and the lack of appropriate transparency mechanisms (Dodge et al. 2019). Perhaps the most challenging conceptual element to grasp is the notion of biased outcomes and inequalities as outputs. The reasons for this are manifold. Firstly, bias and inequality are by no means purely technological concepts; as Kordzadeh and Ghasemaghaei’s (2022) synthesis shows, there is a very rich “techno-philosophical” literature on bias and partiality. Looking at the issue of bias from a philosophical point of view, and referring to the authors’ synthesis, bias is understood to cover the following phenomena:
(1)
an algorithm distributes benefits and burdens unequally between different individuals or groups;
(2)
this unequal distribution is due to differences in the inherent qualities, talents, or luck of individuals, so that
(3)
algorithmic bias is a systematic deviation from equality in the outputs of an algorithm (Kordzadeh and Ghasemaghaei 2022).
Secondly, inequality as a problem essentially implies that there is an ethical standard for algorithm design, based on the assumption that algorithm-based “equality” is achievable and feasible if unintentionally built-in biases are filtered out. Some researchers refer to this concept as “perceived fairness”, meaning the impartial, i.e., non-discriminatory, nature of the outputs generated by algorithms, decision-making processes, and algorithm design (Hooker 2021; Kirkpatrick 2016). To ensure this, Zhou et al. (2022) and Filippi et al. (2023) sought to reflect on the conceptual triad of explainability, accountability, and transparency, arguing that the inequalities caused by algorithms go beyond technological issues and the natural biases of algorithm creators, and that measurable and accountable systems should be created in which biases can be detected and inequalities audited.
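To make the idea of a “systematic deviation from equality in the outputs” operational, fairness research commonly relies on parity metrics. The following Python sketch is our own illustration of one such measure, the statistical parity difference; it is not taken from the works cited above, and the loan decision figures are hypothetical.

```python
def statistical_parity_difference(outcomes):
    """Gap in favourable-outcome rates between two groups.

    outcomes: list of (group, favourable) pairs, where favourable is a bool
    and group takes exactly two values. A value of 0 means both groups
    receive favourable outcomes at the same rate; the further from 0,
    the larger the systematic deviation from equality.
    """
    groups = sorted({g for g, _ in outcomes})
    assert len(groups) == 2, "expects exactly two groups"
    rates = []
    for g in groups:
        decisions = [fav for grp, fav in outcomes if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return rates[0] - rates[1]

# Hypothetical loan decisions: group "a" is approved 30% of the time,
# group "b" 60% of the time.
sample = [("a", i < 30) for i in range(100)] + [("b", i < 60) for i in range(100)]
print(statistical_parity_difference(sample))  # -0.3
```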

3. The Relationship Between Algorithmic Bias and Regulation: Perspectives from the United States

The legal regulation of algorithmic bias is polemical; dozens of questions are posed to the regulator, but the answers often take years to arrive, and even when a solution is found, the original question may no longer be relevant. In this section, we discuss the regulatory dilemmas of algorithmic bias, i.e., why addressing algorithmic bias is a legal dilemma in the first place.
Algorithmic bias primarily concerns privacy and AI regulation in general. “Regulation” in this context does not refer to specific legislation explicitly related to bias, but to the relevant rules of certain broader subject areas that may apply to algorithmic bias. It is not an exaggeration to say that the regulatory framework for this phenomenon is characterized by a multiplicity of initiatives at the national, regional, and global levels, not to mention the rules of platforms and AI companies themselves. It is therefore an evolving regulatory environment, which can be considered a sub-segment of AI regulation and is characterized by a low level of harmonization and by fragmentation (Wang et al. 2024b). As in the case of other generative AI-related legal issues (cf. Lendvai and Gosztonyi 2024), two main regulatory trends can be identified for algorithmic bias: the European, highly restrictive but user-centric approach, and the American, more liberal, “mixed” approach, which supports both technological and economic development (Wang et al. 2024b). Given that the European regulatory approach is discussed in the next section, we briefly focus on the US approach in the remainder of this section.
Wang et al. (2024b) argue that the US legal framework for algorithmic bias is rooted in fundamental civil rights protections and the Fourteenth Amendment, with a strong emphasis on three core principles: equality, non-discrimination, and transparency. The “mixed” regulatory regime mentioned above stems from the fact that algorithmic bias is governed partly by relevant statutes, primarily related to credit reporting, employment, and job placement, such as the Fair Credit Reporting Act (FCRA), and partly by the guidance of enforcement agencies such as the Equal Employment Opportunity Commission (EEOC), which, among other things, address the importance of the impartiality of algorithms (MacCarthy 2018). In addition, court decisions play a significant role, both in interpreting these laws in cases involving algorithmic discrimination and in cases dealing with employment and housing bias caused by increasingly common automated systems (Wang et al. 2024b). While federal efforts such as the EEOC guidelines and executive orders frame the U.S. approach to algorithmic fairness, state-level regulation also plays an influential role, particularly in the absence of comprehensive federal legislation. For instance, the California Consumer Privacy Act (CCPA) and its successor, the California Privacy Rights Act (CPRA), introduced in 2020, grant consumers significant rights over automated decision making and data profiling, effectively creating a de facto standard given California’s market size (Determann and Tam 2020). Illinois’ Biometric Information Privacy Act (BIPA) also deserves mention, even though it pertains only partially to the holistic issues of algorithmic bias, as this piece of legislation has set precedents through litigation against companies using facial recognition or biometric screening in ways that may encode bias. Lastly, novel legislative measures concerning anti-discrimination can also be mentioned. New York City’s Local Law 144, for instance, mandates audits of AI-driven hiring tools for bias before deployment (Koshiyama et al. 2022), while Colorado’s recently passed law is the first American legislative initiative that aims to tackle algorithmic bias in decision making holistically, through oversight and problem assessment stipulations (CBS News 2024). Nonetheless, despite sector-specific state laws, there is currently no single federal instrument specifically focused on algorithmic bias in the United States.
Lastly, the October 2023 Executive Order (EO) on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by then-President Joe Biden, deserves specific mention to underline the current state of the issue in the United States. The EO emphasized that US regulation would explicitly and appropriately address algorithmic bias, and it made a number of recommendations in this regard:
(1)
Clear guidance should be provided to landlords, federal benefit programs, and federal contractors to prevent artificial intelligence algorithms from being used to exacerbate discrimination.
(2)
Ensure and support cooperation between the Department of Justice and federal civil rights agencies to properly investigate civil rights violations involving AI and specifically algorithmic bias.
(3)
Ensure algorithmic fairness and equity in the criminal justice system.
(4)
For points (2) and (3), “best practices” should be developed.
Although the Biden administration appeared ambitious in proposing regulatory incentives for AI in the fall of 2023, following the Blueprint for an AI Bill of Rights proposal a year earlier, the progression of an “American AI Act” has so far been slow and unfruitful. The latter statement is also supported by the fact that newly re-elected President Donald Trump rescinded the EO within a few hours of assuming office on 20 January 2025. Moreover, the Trump administration seems to follow a vastly different approach from that of the Biden administration. As seen from the EO of 23 January 2025, the new administration aims at “removing barriers to American leadership in artificial intelligence” (White House 2025). Trump’s EO outlines a policy to enhance American competitiveness, human well-being, and national security through responsible AI innovation. The order also mandates the creation of an AI Action Plan within 180 days, coordinated by top presidential advisors and relevant agencies—at the time of the drafting of the present paper, this had not yet occurred. Finally, it directs agencies to revise or rescind conflicting policies while clarifying that the order does not create enforceable legal rights (White House 2025).

4. The European Approach—Shooting at Too Many Targets Without Hitting One?

Before outlining the European framework, it is essential to explain why the differences in regional legislation are so stark. The divergent regulatory approaches to algorithmic bias stem from foundational differences in legal philosophy, governance structures, and economic priorities. As opposed to the American approach, the EU has historically emphasized precautionary principles and user-centric rights, reflecting its broader commitment to data protection, human dignity, and social welfare (see Walter 2024). This is evident in the GDPR (Viterbo 2019) and the AIA, which prioritize transparency, fairness, and the mitigation of systemic risk. A critical issue, however, remains largely underdiscussed: political polarization and lobbying by powerful tech companies have also slowed the adoption of laws. It is important to mention, in this context, that the European framework operates in a vastly different AI environment. Unlike other “big players” in AI development, Europe lags behind in developing leading AI technologies, forcing the region into a so-called “AI tango” in which competitiveness and legislation are constantly balanced (Todorova et al. 2023). This dilemma translates into a foundational question: should Europe remove constraints and give up its rigorous policymaking approach, or should it lead in legislation but sacrifice a potential leading role in development?
In the context of algorithmic bias, especially in light of recent developments in the United States, the European legal framework clearly offers the most forward-looking regulatory approach, even if concerns have been expressed that EU law does not address all potential risks (see Hacker 2018). EU legislation approaches the problem through several instruments, and the following sections set out the options available in specific cases.
First, we present the data protection issues, connecting them to Europe’s “flagship” data protection instrument, the General Data Protection Regulation (hereinafter: “GDPR”). Article 22 of the GDPR is pivotal in the context of algorithmic bias (Sancho 2020). According to Article 22(1), the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. The basis for this provision is the objective, also set out in the preamble, that the processing of personal data should be for the benefit of individuals (GDPR Recital 4) and that fair and, above all, transparent processing should be ensured (see GDPR Recital 60). The primary concern with the provision, particularly in the context of algorithmic bias, is its legal implications and the definition of other types of “similarly significant” decisions, which the GDPR only briefly addresses in the preamble (GDPR Recital 70). By legal effect, we primarily mean decisions or actions that affect someone’s rights, legal status, or even contractual rights. According to Barbosa and Félix (2021), examples include effects on voting rights, entitlement to a monthly pension on the grounds of disability, or the ability to enter a country. Interpreting what counts as a decision with a similarly significant impact is a degree more complex (Thouvenin et al. 2022). Although Recital 71 of the GDPR provides some support, beyond illustrative cases, it is not clear from the text of the GDPR what the precise scope of this type of decision is (cf. Thouvenin et al. 2022). Barbosa and Félix (2021) suggest in this regard that the provision should be assessed primarily by the weight of the decision rather than its automated nature; that is, while there is no doubt that a streaming platform recommends a track in an automated way, this presumably does not have a significant or even comparable legal effect. It is also important to note, however, that there are exceptions to Article 22(1). In cases where (A) processing is necessary for the conclusion or performance of a contract, (B) EU law allows it, or (C) the data subject has given his or her explicit consent, the above provision need not be applied (Thouvenin et al. 2022; Malgieri 2019). However, in cases (A) and (C), the GDPR adds an important qualification: the controller shall take appropriate measures to protect the rights, freedoms, and legitimate interests of the data subject, including at least the right to obtain human intervention by the controller, to express his or her point of view, and to contest the decision (GDPR Article 22(3)). As Bygrave (2020) indicates, these options, while essentially ex post, are extremely forward-looking in the area of user and data protection and carry significant safeguards against potentially harmful algorithmic bias resulting from a decision.
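The conditional structure described above can also be summarized schematically. The sketch below is our own illustrative reading of Article 22 as discussed in this section, intended only to show how the prohibition, its exceptions (A)–(C), and the Article 22(3) safeguards interlock; it is not an authoritative compliance tool, and the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    solely_automated: bool           # no meaningful human involvement
    legal_or_similar_effect: bool    # e.g., loss of a benefit entitlement
    necessary_for_contract: bool     # exception (A)
    allowed_by_union_or_member_state_law: bool  # exception (B)
    explicit_consent: bool           # exception (C)

def article_22_reading(decision: AutomatedDecision) -> list:
    """Schematic reading of GDPR Article 22 as summarized in the text above."""
    if not (decision.solely_automated and decision.legal_or_similar_effect):
        return []  # Article 22(1) is not engaged
    if decision.allowed_by_union_or_member_state_law:
        return ["safeguards laid down by the authorising law apply"]
    if decision.necessary_for_contract or decision.explicit_consent:
        # Article 22(3) safeguards for exceptions (A) and (C)
        return ["right to obtain human intervention by the controller",
                "right to express one's point of view",
                "right to contest the decision"]
    return ["solely automated decision not permitted"]

# Example: a solely automated decision with legal effect, based on consent.
print(article_22_reading(AutomatedDecision(True, True, False, False, True)))
```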
Another key regulation of the EU is the Digital Services Act (hereinafter: “DSA”). Enacted in late 2022, the DSA introduces ground-breaking rules in the area of platform regulation (Lendvai 2024), and its provisions can also be partially applied to algorithmic bias. The DSA introduces a new regulatory “philosophy” whereby rules are differentiated based on the category and size of different platforms (Laux et al. 2021; Lendvai 2024; Turillazzi et al. 2023). The issue of algorithms is highlighted in two segments of the DSA. Firstly, it is mentioned in Article 14, which concerns the contractual aspects of the terms and conditions of online platforms (Quintais et al. 2023) and specifically states that the information service providers give on algorithmic decision making must be “clear, simple, understandable, user-friendly” and easily accessible to the recipient of the service. There is a greater focus on algorithms and their transparency in the case of very large online platforms and very large online search engines (abbreviated together as VLOP/VLOSE). Per Article 33(1) of the DSA, these are platforms and search engines that have an average of at least 45 million active users per month in the EU. It should be noted here, however, that a further condition for VLOP/VLOSE status is that the European Commission must also designate the platform as such. The designation procedure is not a one-off exercise; the Commission is constantly monitoring and expanding the list of VLOP/VLOSEs to outline the most complete picture of who the “giants” among platforms are. VLOP/VLOSEs are subject to several new, very progressive, yet rigorous rules; the most significant new stipulation, however, is the obligation to assess risk in the area of algorithms. Indeed, Article 34 of the DSA requires VLOP/VLOSEs to carefully identify, analyze, and evaluate the systemic risks arising from the design or operation of their services and related systems, including algorithmic systems, or from the use of their services. Such systemic risks include, inter alia, the sharing of illegal content; negative impacts on fundamental rights; threats to democratic discourse and public safety; and negative consequences for the physical and mental well-being of persons, concerning gender-based violence, public health, and the protection of minors (Lendvai 2024; Husovec 2024). Algorithmic bias can manifest itself in all of these systemic risks, for instance through the systematic or sometimes “invisible” deletion of certain opinions (also known as “shadowbanning”, Jones (2023)), disinformation campaigns and the proliferation of online political propaganda and their exploitation by algorithms, or the exclusion of certain demographics from health information (see Ratwani et al. 2024). If a VLOP/VLOSE identifies such a bias, it should first assess and then mitigate the resulting risks in accordance with Articles 34–35 of the DSA. A notable example of the latter is the requirement that platforms test and verify the algorithmic systems they use in accordance with Article 35(1)(d) of the DSA. Another provision of the DSA (Article 40), which is particularly beneficial to researchers, stipulates that VLOP/VLOSEs must also, albeit in a narrow context, describe the design, logic, operation, and testing of their algorithmic systems, including their recommender systems (Liesenfeld 2024).
The most significant chapter of EU regulation concerning algorithmic governance is, without a doubt, the Artificial Intelligence Act (AI Act, “AIA”), which was finally adopted in the summer of 2024, following significant anticipation. Although the term “algorithm” appears relatively infrequently in the AIA, several articles are applicable to the phenomenon of algorithmic bias. Bias first appears in Recital 27 of the AIA, which emphasizes that algorithmic bias in AI systems must be addressed through the promotion of diversity, fairness, and non-discrimination. The recital also refers to the ethical guidelines for trustworthy AI published on 8 April 2019 by the EU’s High-Level Expert Group on AI, which advocate for inclusive development and the avoidance of unfair or discriminatory impacts. Unlike the AIA itself, the guidelines define the concept of bias as tendencies or prejudices that (1) may influence outputs and (2) originate from various sources such as data collection, rule design, user interaction, or limited application contexts. The guidelines also accentuate that while bias can be intentional or unintentional and, in some cases, even beneficial, algorithmic bias often leads to discriminatory outcomes. This is what the guidelines refer to as “unfair bias”. To prevent unfair bias, the guidelines highlight the importance of addressing the root causes of the problem, for example, identifying and remedying inherent prejudices in the data and discriminatory algorithmic design. While the guidelines remain abstract, they identify measures such as transparent oversight and promoting diversity in development teams as key mitigation strategies.
Recital 48 of the AIA emphasizes that AI systems may have potentially harmful effects on fundamental rights, including non-discrimination, equality, and fairness. Although a detailed presentation of the AIA’s risk-based structure exceeds the scope of this paper, it should be emphasized that (similar to the DSA) a differentiated system is applied to AI systems. The regulation categorizes AI systems based on their level of risk, with distinct rules applying to prohibited, high-risk, and general-purpose systems that pose systemic risks (Golpayegani et al. 2023). Although the four-tier risk classification structure proposed in earlier drafts changed substantially in the final text, the role and significance of algorithms remain consistent with prior versions (cf. Novelli et al. 2023). Within this context, prohibited AI systems include those that severely interfere with or distort human decision making, exploit users’ vulnerabilities, classify individuals based on certain criteria (a passage that clearly refers to the Chinese social credit system and the related facial recognition infrastructure, see Mac Síthigh and Siems 2019), predict criminal behavior using facial recognition databases, infer emotions, apply biometric categorization, or use real-time biometric identification in publicly accessible spaces for law enforcement purposes. Regarding biometric identification, Recital 32 explicitly addresses the issue of bias, stating that biased outcomes and discriminatory impacts are particularly relevant with regard to age, ethnicity, race, gender, and disability, which is why such practices are broadly prohibited.
For high-risk AI systems, the AIA—similar to the DSA—introduces a risk assessment framework (Kusche 2024). This framework mandates the ongoing identification, analysis, and mitigation of risks, including algorithmic bias. Although the term “algorithm” is not explicitly mentioned in these provisions, Article 13 of the AIA, which addresses the transparency of high-risk systems, can largely be interpreted as setting rules for algorithm transparency. Article 13 requires clear and comprehensive documentation explaining the capabilities and limitations of the AI system, including the possibility that it may produce biased results. Notably, Recital 67 discusses in detail the datasets used for training, validation, and testing. Here, the concern with algorithmic bias arises when datasets contain statistical information about individuals or groups, potentially leading to the exclusion of vulnerable groups. To mitigate such risks, these datasets must reflect the specific geographic, contextual, behavioral, or functional environments in which the AI system is intended to be used. Article 10 reinforces this by stating that all potential risks posed by high-risk AI systems that use such datasets must be investigated—especially those that may affect fundamental rights, lead to discrimination, or have any impact (whether negative, neutral, or positive) on individuals’ health and safety. Such investigations are critically important when system outputs affect future inputs, for example, in predictive policing scenarios where prior data influences deployment recommendations.
In addition to investigation, preventive and mitigation measures must also be implemented for such AI systems. Furthermore, Article 10(5) emphasizes that high-risk AI providers may only process sensitive categories of personal data if this is strictly necessary for identifying or correcting bias, and if the detection of bias cannot be achieved through other means (Van Bekkum 2025). Such data processing must comply with strict safeguards, including pseudonymization, limited access, security checks, prohibition of sharing with third parties, and timely deletion. Detailed documentation must also justify the necessity of using sensitive data and demonstrate compliance with EU data protection laws. Lastly, Article 14 outlines requirements for effective human oversight of high-risk systems to minimize risks to health, safety, and fundamental rights. In the context of algorithmic bias, Article 14(4)(b) is particularly important as it addresses automation bias. The regulation mandates that oversight mechanisms must help users understand these risks and ensure that individuals are capable of questioning or overriding AI decisions. Article 15 also provides guidance on system design and bias mitigation, stating that high-risk AI systems must be designed to minimize biased feedback loops, for instance, in systems that continue to learn post-deployment. This includes implementing technical and organizational measures to prevent biased outputs from influencing future inputs and ensuring active mitigation of such risks.
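The feedback-loop concern in Article 15 can be illustrated with a simple operational guard: keeping model-generated labels out of future training data and monitoring group-wise selection rates for drift after deployment. The sketch below is our own illustration of one possible measure of this kind, not a requirement or methodology prescribed by the AIA; all field names and thresholds are hypothetical.

```python
def guard_feedback_loop(records, baseline_rates, drift_tolerance=0.05):
    """Toy guard against biased feedback loops in a continuously learning system.

    records: list of dicts with keys "group", "selected" (the model's output)
             and "label_source" ("human" or "model").
    baseline_rates: dict mapping group -> selection rate observed at deployment.
    Returns (retraining_set, alerts).
    """
    # 1. Exclude examples whose labels were produced by the model itself,
    #    so that its own outputs do not become tomorrow's ground truth.
    retraining_set = [r for r in records if r["label_source"] == "human"]

    # 2. Flag groups whose current selection rate has drifted from the baseline.
    alerts = []
    for group, baseline in baseline_rates.items():
        group_records = [r for r in records if r["group"] == group]
        if not group_records:
            continue
        rate = sum(r["selected"] for r in group_records) / len(group_records)
        if abs(rate - baseline) > drift_tolerance:
            alerts.append(f"selection rate for {group} drifted to {rate:.2f} "
                          f"(baseline {baseline:.2f})")
    return retraining_set, alerts

# Hypothetical post-deployment batch for a single group.
batch = [{"group": "x", "selected": True, "label_source": "model"},
         {"group": "x", "selected": True, "label_source": "human"},
         {"group": "x", "selected": False, "label_source": "human"}]
kept, warnings = guard_feedback_loop(batch, {"x": 0.10})
print(len(kept), warnings)  # 2 human-labelled records kept; drift alert raised
```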
Finally, it is also important to briefly mention the longstanding tradition of European anti-discrimination law. Unarguably, the GDPR, DSA, and AIA represent the core of the EU’s digital regulatory architecture, but this architecture is also built upon a longstanding legal foundation in which anti-discrimination serves as a core principle. For instance, directives such as the Racial Equality Directive (2000/43/EC of 29 June 2000), the Employment Equality Directive (2000/78/EC of 27 November 2000), and the Equal Treatment Directive (2006/54/EC of 5 July 2006) have all established robust principles of equal treatment across domains including employment, education, and access to goods and services (Ellis and Watson 2012). These instruments have been interpreted by the Court of Justice of the European Union (CJEU) to reinforce substantive equality, including in contexts involving indirect or systemic discrimination (see Frese 2021). Moreover, all EU member states are bound by the European Convention on Human Rights, which, in Article 14, prohibits discrimination on a broad range of grounds, including sex, race, language, political or other opinion, and national or social origin. Though these legal instruments predate the emergence of algorithmic governance, they remain directly relevant today; algorithmic bias often results in precisely the kind of disparate impact that EU law has long sought to prevent.

5. Algorithmic Bias, the Modern Lernaean Hydra?—Dilemmas and Future Regulatory Issues

To use a mythological analogy, the regulation of AI, and especially of algorithms, resembles Heracles’ battle with the Lernaean Hydra: for every regulatory solution, new and more complex challenges emerge. Among the most persistent is the so-called “black box” effect, an epistemic and technical opacity that defines many AI systems, particularly those based on deep learning architectures (Savage 2022). These models derive patterns from vast datasets, utilizing millions of parameters and layers, which results in outputs whose internal logic is inaccessible, even to their developers (Brożek et al. 2023). Unlike rule-based systems, where decisions can be retraced, machine learning systems often cannot provide a clear causal link between input data and resulting outcomes. This, however, severely hampers the possibility of comprehensive legal oversight. From the regulator’s standpoint, the lack of interpretability essentially means that identifying discriminatory patterns or challenging flawed reasoning becomes nearly impossible. Though the AIA indeed mandates a comprehensive set of stipulations promoting transparency for high-risk AI systems, these obligations focus more on procedural compliance than on substantive interpretability. This approach gives rise to a practical problem: no current EU or national regulation mandates that algorithmic processes be explainable in a manner that a layperson—or sometimes even an auditor—can consistently understand. Furthermore, circling back to the underlying issues mentioned above, trade secret protections are often invoked to avoid disclosing internal model logic, creating a paradox in which systems may be lawful on paper but illegible in practice (see Foss-Solbrekk 2023). This paradox fosters an environment in which algorithmic bias can persist unchecked, as even diligent regulators lack the tools to audit or remedy systemic inequalities effectively. The result is a peculiar legal situation: without independent technical audits, enforceable standards of explainability, and mechanisms to challenge opacity claims, the EU’s stated goals of fairness, accountability, and non-discrimination remain aspirational.
Another obstacle could be enforcement. The AIA, in particular, lays down enforcement rules. Nonetheless, it is questionable whether adequate resources and, most importantly, expertise can be devoted to ensuring proper enforcement. In this respect, it is of grave concern that the European AI Office, which was established by the AIA and which aims to contribute to the implementation, monitoring, and supervision of AI systems, general-purpose AI systems, and AI governance, has been able to deliver mainly administrative results in the last year, while the EU’s AI Board supporting the work of the Office has met only twice so far, without any substantive results at the time of the writing of this study (EU 2024). The lack of resources and slow bureaucratic processes are even more pronounced at the national level, as most Member States do not yet have a dedicated authority for AI and its monitoring. In this regard, we argue that closing the gap between legal regulation and technical implementation necessitates a deeper engagement with existing methods for mitigating algorithmic bias. In practice, bias can be addressed at three main stages: pre-processing (PRP), in-processing (IP), and post-processing (POP) (Kim and Cho 2022; González-Sendino et al. 2024). Methods include well-developed techniques such as fair representation in the PRP phase, which aims at eliminating embedded biases in datasets while preserving the data’s critical structural and semantic properties, the incorporation of fairness constraints in the IP stage, and threshold-adjusting mechanisms in the POP phase, which allow outcomes to be balanced across different groups (Kim and Cho 2022). These techniques are essential complements to legal instruments such as the AI Act. Moreover, without technical audits, it is nearly impossible for regulators to evaluate whether a system complies with fairness mandates. Therefore, we suggest that legal responses to algorithmic bias will remain superficial, or worse, symbolic, unless they are accompanied by robust, context-sensitive mitigation strategies developed and validated by computer scientists, ethicists, and domain experts.
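As a concrete illustration of the post-processing (POP) family mentioned above, group-specific decision thresholds can be tuned so that selection rates are brought closer together. The Python sketch below is a simplified illustration under assumed score distributions; it is not drawn from Kim and Cho (2022) or from any regulatory methodology.

```python
def equalize_selection_rates(scores_by_group, target_rate):
    """Post-processing sketch: choose a per-group score threshold so that each
    group's selection rate approximates target_rate.

    scores_by_group: dict mapping group -> list of model scores in [0, 1].
    Returns a dict mapping group -> threshold (the lowest selected score).
    """
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores, reverse=True)
        k = max(1, round(target_rate * len(ranked)))  # number of candidates selected
        thresholds[group] = ranked[k - 1]
    return thresholds

# Hypothetical score distributions for two groups of five candidates each.
scores = {"group_a": [0.9, 0.8, 0.7, 0.4, 0.3],
          "group_b": [0.6, 0.5, 0.4, 0.2, 0.1]}
print(equalize_selection_rates(scores, target_rate=0.4))
# {'group_a': 0.8, 'group_b': 0.5} -- both groups now have a 40% selection rate
```

Such threshold adjustment does not remove bias from the underlying model; it only rebalances outcomes, which is precisely why the argument above stresses that technical mitigation and legal audits must operate together.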
Perhaps an even bigger problem, and one that goes beyond legal regulation, is the lack of harmonization in the regulation of algorithms. As the UN Secretary General’s Expert Panel on AI indicated in autumn 2024, there is an “irrefutable” need for global regulation, and market forces must not be allowed to dictate AI development and regulatory boundaries (UN 2024). The report also raises concerns that, without proper global governance, the benefits of AI may be limited to a select few nations and organizations, which could exacerbate the already existing digital divide and inequalities (UN 2024).
Lastly, it is fundamental to ask whether there could be an “ideal” framework. Though we cannot answer this query with absolute certainty, we claim that an optimal legislative framework to regulate algorithmic bias would integrate the European Union’s strong emphasis on fundamental rights and transparency with the United States’ flexibility and innovation-driven pragmatism. By this, we mean that from the EU model, the framework should retain robust ex ante safeguards (such as risk classification, mandatory documentation, and user rights to explanation), ensuring that systems are not only compliant but also accountable by design. From the U.S. approach, on the other hand, it should adopt context-sensitive standards that allow for technological experimentation while ensuring that outcomes do not disproportionately harm protected groups. Furthermore, such legislation must also provide precise enforcement mechanisms, including independent audits and meaningful sanctions, coupled with incentives for private-sector innovation in fairness-enhancing technologies. Most importantly, however, the ideal regulation would be adaptive, capable of responding to emerging AI applications without becoming obsolete or overly rigid.
From a more theoretical perspective, an ideal framework would also have to define equality more precisely. This question emerges from the fact that while legal frameworks often—innately—assume that fairness can be achieved through neutral, data-driven processing, this assumption collapses when viewed against entrenched social and economic inequalities. A striking example is the widely cited case of discriminatory healthcare algorithms in the United States that predicted the need for medical intervention based on past healthcare expenditures, often favoring one group over other, more marginalized groups (Obermeyer et al. 2019). Furthermore, the contested nature of equality in algorithmic design is mirrored in recent legal developments. A particularly illustrative case is the U.S. Supreme Court’s 2023 decision in Students for Fair Admissions v. Harvard and its companion case against the University of North Carolina (UNC), in which the Court ruled that race-conscious admissions policies violated the Equal Protection Clause of the Fourteenth Amendment. In the respective 6–2 (v. Harvard) and 6–3 (v. UNC) decisions, the majority held that such policies lacked clear, measurable objectives and imposed unjustified burdens on certain applicants, particularly Asian Americans. Although the cases concerned higher education admission procedures rather than automated systems, the decisions highlight a growing judicial preference for formal, colorblind interpretations of equality over substantive, redistributive approaches. This judicial trend also has direct implications for algorithmic fairness. Given that legal norms increasingly prioritize neutrality over equity, developers and regulators may find themselves constrained in implementing algorithmic models that deliberately account for socioeconomic inequality, raising further questions about the scope and legitimacy of fairness interventions in automated systems. This dilemma leads to a deeper issue, embedded in the question of whether algorithmic equality or equity is an “objective” criterion or is, in fact, a proxy for structural privilege. Thus, algorithmic equality may not be a mere empirical problem to be corrected with better data, but rather a normative construct that requires deliberate intervention. Accordingly, regulatory frameworks must go beyond requiring explainability or transparency and grapple with the ethical decisions inherent in variable selection, weight assignment, and outcome optimization.

6. Conclusions

Continuing the ancient metaphor mentioned above, Heracles needed Iolaus, who (albeit with some divine help) found the right approach to the Hydra that allowed the hero of Thebes to defeat the monster. In regulating algorithmic bias, and in agreement with the UN report, a more comprehensive answer requires cooperative regulation at the global level. This cooperative work will involve much more than a joint effort by individual states; it must include the expertise and work of NGOs, researchers from various fields of academia, advocacy organizations, and economic actors active in AI, as well as, of course, a voice for the marginalized groups most affected by the coded exclusion of algorithms.
In this paper, we examined the issue of algorithmic bias, with a particular focus on its conceptual basis and the current state of regulation. We outlined the main legal orientations, with a particular focus on European Union (EU) rules. The main finding of the research—and an important call—is the urgent need for a harmonized global regulatory framework. Current regional and national efforts, while commendable, are not sufficient to address the cross-border nature of the development and application of AI or to prevent digital authoritarianism. Future research should present technical solutions to algorithmic bias and, regarding legal issues, could draw important results from regulatory initiatives and solutions outside the EU and the US that anticipate domestic trends.

Author Contributions

Conceptualization, G.F.L.; methodology, G.F.L.; investigation, G.F.L. and G.G.; resources, G.G.; original draft preparation, G.F.L.; writing—review and editing, G.F.L. and G.G.; supervision, G.G.; project administration, G.F.L. and G.G.; funding acquisition, G.G. All authors have read and agreed to the published version of the manuscript.

Funding

The research was conducted with the financial support of a fellowship from the Center for Advanced Internet Studies (CAIS). The research was also supported by the EKÖP-24-3 University Research Scholarship Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund and by the ADVANCED_24 funding scheme of the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
EO: Executive Order
GDPR: General Data Protection Regulation, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
DSA: Digital Services Act, Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act)
AIA: Artificial Intelligence Act, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

References

  1. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. ProPublica. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed on 10 April 2025).
  2. Barbosa, Sandra, and Sara Félix. 2021. Algorithms and the GDPR: An Analysis of Article 22. In Anuário Da Proteção De Dados 2021 [Yearbook of Data Protection 2021]. Lisbon: NOVA School of Law, pp. 67–93. [Google Scholar]
  3. BBC News. 2018. Amazon Scrapped ‘Sexist AI’ Tool. October 10. Available online: https://www.bbc.com/news/technology-45809919 (accessed on 10 April 2025).
  4. Bouchagiar, Georgios. 2024. Is Europe Prepared for Risk Assessment Technologies in Criminal Justice? Lessons From the US Experience. New Journal of European Criminal Law 15: 72–98. [Google Scholar] [CrossRef]
  5. Brennan, Tim, and William Dieterich. 2017. Correctional Offender Management Profiles for Alternative Sanctions (COMPAS). In Handbook of Recidivism Risk/Needs Assessment Tools. Hoboken: Wiley, pp. 49–75. [Google Scholar] [CrossRef]
  6. Brennan, Tim, William Dieterich, and Beate Ehret. 2008. Evaluating the Predictive Validity of the Compas Risk and Needs Assessment System. Criminal Justice and Behavior 36: 21–40. [Google Scholar] [CrossRef]
  7. Brożek, Bartosz, Michał Furman, Marek Jakubiec, and Bartłomiej Kucharzyk. 2023. The Black Box Problem Revisited. Real and Imaginary Challenges for Automated Legal Decision Making. Artificial Intelligence and Law 32: 427–40. [Google Scholar] [CrossRef]
  8. Bygrave, Lee A. 2020. Article 22. In The EU General Data Protection Regulation (GDPR): A Commentary. Oxford: Oxford University Press. [Google Scholar]
  9. CBS News. 2024. Colorado Is First in Nation to Pass Legislation Tackling Threat of AI Bias in Pivotal Decisions. CBS News, May 24. Available online: https://www.cbsnews.com/news/ai-colorado-law-algorithms-bias-antidiscrimination/ (accessed on 10 April 2025).
  10. Chen, Zhisheng. 2023. Ethics and Discrimination in Artificial Intelligence-enabled Recruitment Practices. Humanities and Social Sciences Communications 10: 567. [Google Scholar] [CrossRef]
  11. Costa, Carlos J., Manuela Aparicio, Sofia Aparicio, and Joao Tiago Aparicio. 2024. The Democratization of Artificial Intelligence: Theoretical Framework. Applied Sciences 14: 8236. [Google Scholar] [CrossRef]
  12. Da Silveira, Julia Barroso, and Ellen Alves Lima. 2024. Racial Biases in AIs and Gemini’s Inability to Write Narratives About Black People. Emerging Media 2: 277–87. [Google Scholar] [CrossRef]
  13. Determann, Lothar, and Jonathan Tam. 2020. The California Privacy Rights Act of 2020: A Broad and Complex Data Processing Regulation That Applies to Businesses Worldwide. Journal of Data Protection & Privacy 4: 7. [Google Scholar] [CrossRef]
  14. Dodge, Jonathan, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, and Casey Dugan. 2019. Explaining models. Paper presented at 24th International Conference on Intelligent User Interfaces, Marina Del Ray, CA, USA, March 17–20; pp. 275–85. [Google Scholar]
  15. Ellis, Evelyn, and Philippa Watson. 2012. EU Anti-Discrimination Law. Oxford: Oxford Academic. [Google Scholar] [CrossRef]
  16. Engel, Christoph, Lorenz Linhardt, and Marcel Schubert. 2024. Code is law: How COMPAS affects the way the judiciary handles the risk of recidivism. Artificial Intelligence and Law 33: 383–404. [Google Scholar] [CrossRef]
  17. EU. 2024. European AI Office. EU. Available online: https://digital-strategy.ec.europa.eu/en/policies/ai-office (accessed on 10 April 2025).
  18. Fazelpour, Sina, and David Danks. 2021. Algorithmic Bias: Senses, Sources, Solutions. Philosophy Compass 16: e12760. [Google Scholar] [CrossRef]
  19. Filippi, Christopher G., Joel M. Stein, Zihao Wang, Spyridon Bakas, Yichuan Liu, Peter D. Chang, Yvonne Lui, Christopher Paul Hess, Daniel Paul Barboriak, Adam Eugene Flanders, and et al. 2023. Ethical Considerations and Fairness in the Use of Artificial Intelligence for Neuroradiology. American Journal of Neuroradiology 44: 1242–48. [Google Scholar] [CrossRef]
  20. Flores, Anthony W., Kristin Bechtel, and Christopher T. Lowenkamp. 2016. False Positives, False Negatives, and False Analyses: A Rejoinder to ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks’. Federal Probation 80: 38. Available online: https://www.uscourts.gov/about-federal-courts/probation-and-pretrial-services/federal-probation-journal/2016/09/false-positives-false-negatives-and-false-analyses-a-rejoinder-machine-bias-theres-software-used (accessed on 10 April 2025).
  21. Foss-Solbrekk, Katarina. 2023. Searchlights Across the Black Box: Trade Secrecy Versus Access to Information. Computer Law & Security Review 50: 105811. [Google Scholar] [CrossRef]
  22. Frese, Amalie. 2021. Anti-discrimination Case Law of the Court of Justice of the European Union Before and After the Economic Crisis. The Law and Development Review 15: 357–79. [Google Scholar] [CrossRef]
  23. Golpayegani, Delaram, Harshvardhan J. Pandit, and Dave Lewis. 2023. To Be High-Risk, or Not to Be—Semantic Specifications and Implications of the AI Act’s High-Risk AI Applications and Harmonised Standards. Paper presented at 2022 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, June 12–15; pp. 905–15. [Google Scholar] [CrossRef]
  24. González-Sendino, Rubén, Emilio Serrano, and Javier Bajo. 2024. Mitigating Bias in Artificial Intelligence: Fair Data Generation via Causal Models for Transparent and Explainable Decision-making. Future Generation Computer Systems 155: 384–401. [Google Scholar] [CrossRef]
  25. Hacker, Philipp. 2018. Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU Law. Common Market Law Review 55: 1143–85. [Google Scholar] [CrossRef]
  26. Hofeditz, Lennart, Milad Mirbabaie, Audrey Luther, Riccarda Mauth, and Ina Rentemeister. 2022. Ethics Guidelines for Using AI-based Algorithms in Recruiting: Learnings from a Systematic Literature Review. Paper presented at Annual Hawaii International Conference on System Sciences/the Annual Hawaii International Conference on System Sciences, Virtual, January 4–7. [Google Scholar] [CrossRef]
  27. Hooker, Sara. 2021. Moving Beyond ‘Algorithmic Bias Is a Data Problem’. Patterns 2: 100241. [Google Scholar] [CrossRef]
  28. Hsu, Jeremy. 2020. Can AI Hiring Systems Be Made Antiracist? Makers and Users of AI-assisted Recruiting Software Reexamine the Tools’ Development and How They’re Used—[News]. IEEE Spectrum 57: 9–11. [Google Scholar] [CrossRef]
  29. Husovec, Martin. 2024. The Digital Services Act’s Red Line: What the Commission Can and Cannot Do About Disinformation. Journal of Media Law 16: 47–56. [Google Scholar] [CrossRef]
  30. Imran, Muhammad, and Norah Almusharraf. 2024. Google Gemini as a Next Generation AI Educational Tool: A Review of Emerging Educational Technology. Smart Learning Environments 11: 22. [Google Scholar] [CrossRef]
  31. Johnson, Gabbrielle M. 2020. Algorithmic Bias: On the Implicit Biases of Social Technology. Synthese 198: 9941–61. [Google Scholar] [CrossRef]
  32. Jones, Corinne. 2023. Search Engine Discourse Analysis: How ‘Shadowban’ Affects Policy. Information Communication & Society 27: 1025–42. [Google Scholar] [CrossRef]
  33. Kim, Jin-Young, and Sung-Bae Cho. 2022. An Information Theoretic Approach to Reducing Algorithmic Bias for Machine Learning. Neurocomputing 500: 26–38. [Google Scholar] [CrossRef]
  34. Kirkpatrick, Keith. 2016. Battling Algorithmic Bias. Communications of the ACM 59: 16–17. [Google Scholar] [CrossRef]
  35. Kordzadeh, Nima, and Maryam Ghasemaghaei. 2022. Algorithmic Bias: Review, Synthesis, and Future Research Directions. European Journal of Information Systems 31: 388–409. [Google Scholar] [CrossRef]
  36. Koshiyama, Adriano, Airlie Hilliard, Emre Kazim, and Stephan Ledain. 2022. Is It Enough to Audit Recruitment Algorithms for Bias?—OECD.AI. September 27. Available online: https://oecd.ai/en/wonk/audit-recruitment-algorithms-for-bias (accessed on 10 April 2025).
  37. Kusche, Isabel. 2024. Possible Harms of Artificial Intelligence and the EU AI Act: Fundamental Rights and Risk. Journal of Risk Research.
  38. Lagioia, Francesca, Riccardo Rovatti, and Giovanni Sartor. 2022. Algorithmic Fairness Through Group Parities? The Case of COMPAS-SAPMOC. AI & Society 38: 459–78.
  39. Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2021. Taming the Few: Platform Regulation, Independent Audits, and the Risks of Capture Created by the DMA and DSA. Computer Law & Security Review 43: 105613.
  40. Lendvai, Gergely Ferenc. 2024. Taming the Titans?—Digital Constitutionalism and the Digital Services Act. ESSACHESS—Journal for Communication Studies 17: 169–84.
  41. Lendvai, Gergely Ferenc, and Gergely Gosztonyi. 2024. Deepfake y desinformación: ¿Qué puede hacer el derecho frente a las noticias falsas creadas por deepfake? [Deepfake and Disinformation: What Can the Law Do Against Fake News Created by Deepfakes?]. IDP Revista de Internet, Derecho y Política 41: 1–13.
  42. Liesenfeld, Anna. 2024. The Legal Significance of Independent Research Based on Article 40 DSA for the Management of Systemic Risks in the Digital Services Act. European Journal of Risk Regulation 16: 184–96.
  43. Lippert-Rasmussen, Kasper. 2024. Algorithmic and Non-Algorithmic Fairness: Should We Revise Our View of the Latter Given Our View of the Former? Law and Philosophy 44: 155–79.
  44. MacCarthy, Mark. 2018. Standards of Fairness for Disparate Impact Assessment of Big Data Algorithms. SSRN Electronic Journal.
  45. Mac Síthigh, Daithí, and Mathias Siems. 2019. The Chinese Social Credit System: A Model for Other Countries? Modern Law Review 82: 1034–71.
  46. Malgieri, Gianclaudio. 2019. Automated Decision-making in the EU Member States: The Right to Explanation and Other ‘Suitable Safeguards’ in the National Legislations. Computer Law & Security Review 35: 105327.
  47. Novelli, Claudio, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, and Luciano Floridi. 2023. Taking AI Risks Seriously: A New Assessment Model for the AI Act. AI & Society 39: 2493–97.
  48. Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science 366: 447–53.
  49. O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Allen Lane. Available online: https://ci.nii.ac.jp/ncid/BB22310261 (accessed on 10 April 2025).
  50. Purves, Duncan, and Jeremy Davis. 2023. Should Algorithms That Predict Recidivism Have Access to Race? American Philosophical Quarterly 60: 205–20.
  51. Quintais, João Pedro, Naomi Appelman, and Ronan Ó. Fathaigh. 2023. Using Terms and Conditions to Apply Fundamental Rights to Content Moderation. German Law Journal 24: 881–911.
  52. Ratwani, Raj M., Karey Sutton, and Jessica E. Galarraga. 2024. Addressing AI Algorithmic Bias in Health Care. JAMA 332: 1051.
  53. Robert, Lionel P., Casey Pierce, Liz Marquis, Sangmi Kim, and Rasha Alahmad. 2020. Designing Fair AI for Managing Employees in Organizations: A Review, Critique, and Design Agenda. Human-Computer Interaction 35: 545–75.
  54. Saeidnia, Hamid Reza. 2023. Welcome to the Gemini Era: Google DeepMind and the Information Industry. Library Hi Tech News, ahead of print.
  55. Sancho, Diana. 2020. Automated Decision-Making Under Article 22 GDPR. In Algorithms and Law. Cambridge: Cambridge University Press, pp. 136–56.
  56. Savage, Neil. 2022. Breaking Into the Black Box of Artificial Intelligence. In Nature Outlook: Robotics and Artificial Intelligence. Available online: https://www.nature.com/articles/d41586-022-00858-1 (accessed on 10 April 2025).
  57. Shin, Donghee, and Emily Y. Shin. 2023. Data’s Impact on Algorithmic Bias. Computer 56: 90–94.
  58. Singh, Jay P. 2013. Predictive Validity Performance Indicators in Violence Risk Assessment: A Methodological Primer. Behavioral Sciences & the Law 31: 8–22.
  59. Thouvenin, Florent, Alfred Früh, and Simon Henseler. 2022. Article 22 GDPR on Automated Individual Decision-Making: Prohibition or Data Subject Right? European Data Protection Law Review 8: 183–98.
  60. Telegraph. 2024. From Black Nazis to female Popes and American Indian Vikings: How AI went ‘woke’. The Telegraph. Available online: https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke/ (accessed on 10 April 2025).
  61. Todorova, Christina, George Sharkov, Huib Aldewereld, Stefan Leijnen, Alireza Dehghani, Stefano Marrone, Carlo Sansone, Maurice Lynch, John Pugh, Tarry Singh, and et al. 2023. The European AI Tango: Balancing Regulation Innovation and Competitiveness. Paper presented at the Conference on Human Centered Artificial Intelligence—Education and Practice, HCAIep 2023, Dublin, Ireland, December 14–15.
  62. Turillazzi, Aina, Mariarosaria Taddeo, Luciano Floridi, and Federico Casolari. 2023. The Digital Services Act: An Analysis of Its Ethical, Legal, and Social Implications. Law, Innovation and Technology 15: 83–106.
  63. UN. 2024. Governing AI for Humanity: Final Report. New York: United Nations.
  64. Van Bekkum, Marvin. 2025. Using Sensitive Data to De-bias AI Systems: Article 10(5) of the EU AI Act. Computer Law & Security Review 56: 106115.
  65. Veale, Michael, Max Van Kleek, and Reuben Binns. 2018. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Paper presented at the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, April 21–26.
  66. Viterbo, Francesco Giacomo. 2019. The ‘User-Centric’ and ‘Tailor-Made’ Approach of the GDPR Through the Principles It Lays Down. Italian Law Journal 5: 631–72.
  67. Walter, Yoshija. 2024. Managing the Race to the Moon: Global Policy and Governance in Artificial Intelligence Regulation—A Contemporary Overview and an Analysis of Socioeconomic Consequences. Discover Artificial Intelligence 4: 14.
  68. Wang, Wenxuan, Haonan Bai, Jen-Tse Huang, Yuxuan Wan, Youliang Yuan, Haoyi Qiu, Nanyun Peng, and Michael Lyu. 2024a. New Job, New Gender? Measuring the Social Bias in Image Generation Models. Paper presented at MM 2024, the 32nd ACM International Conference on Multimedia, Melbourne, VIC, Australia, October 28–November 1; pp. 3781–89.
  69. Wang, Xukang, Ying Cheng Wu, Xueliang Ji, and Hongpeng Fu. 2024b. Algorithmic Discrimination: Examining Its Types and Regulatory Measures with Emphasis on US Legal Practices. Frontiers in Artificial Intelligence 7: 1320277.
  70. White House. 2025. Initial Rescissions of Harmful Executive Orders and Actions. Available online: https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/ (accessed on 10 April 2025).
  71. Zhou, Nengfeng, Zach Zhang, Vijayan N. Nair, Harsh Singhal, and Jie Chen. 2022. Bias, Fairness and Accountability with Artificial Intelligence and Machine Learning Algorithms. International Statistical Review 90: 468–80.