Algorithmic Bias as a Core Legal Dilemma in the Age of Artificial Intelligence: Conceptual Basis and the Current State of Regulation
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The article conducts an important comparison between the legislation (or lack thereof) in the US and the relevant AI legislation in the European Union. It would be interesting to add a section that deals with the underlying reasons for this difference.
It would also be beneficial to discuss what ideal legislation would look like, given the frameworks discussed for the US and the EU.
I am not sure that the problematic Gemini images were due to the training data rather than simply hallucinations, which do not necessarily result from the training data.
Author Response
Thank you very much for these important suggestions and questions. We have addressed all queries in detail in the manuscript. Firstly, we added a comprehensive outline of the underlying reasons for the differences between the US and EU frameworks before the EU segment (Section 4). Furthermore, we have added a short section at the end of the discussion on what we consider ideal legislation. Lastly, we have addressed the Gemini issue: we now specify that it resulted from bias on the part of the data trainers. Thank you again!
Reviewer 2 Report
Comments and Suggestions for Authors
Thank you for the opportunity to review this manuscript on the legal governance of algorithmic bias. The topic is both timely and significant, but I have identified several fundamental flaws that must be addressed before this work can be considered for publication. My detailed comments follow.
1. The author devotes considerable space to describing the specific provisions of EU regulations but offers no substantive analysis of their effectiveness, limitations, or real-world implementation. Key concepts such as the “black-box effect” are treated superficially. Opacity is mentioned in broad terms without a deep dive into the concrete technical and legal challenges involved.
2. The manuscript focuses solely on federal anti-discrimination laws and executive orders yet omits any discussion of the widely varying state-level AI and data-privacy statutes. Given that U.S. enterprises are most affected by the strictest state regulations, an analysis confined to the federal level is clearly inadequate.
3. Although the author advocates for the establishment of unified standards on a global scale, there is no assessment of the political, economic, and cultural obstacles that different jurisdictions would present. While most legal scholars favor an EU-style comprehensive AI framework, such an approach frequently draws criticism from leading U.S. AI firms, perhaps explaining the author’s reluctance to examine state-level divergences.
4. Despite claiming a comparative-law perspective, the paper merely lists and describes the rules in each jurisdiction rather than conducting a genuine comparison. In the AI field, the greatest tension lies in the fact that Europe supplies much of the theoretical basis while practical leadership resides with U.S. and East Asian companies. A truly cross-jurisdictional comparison is therefore essential but missing here.
5. The article remains at the level of legal-system design theory and fails to engage with existing technical approaches for mitigating algorithmic bias. This disconnect between theory and practice, not only geographically but also methodologically, is a fundamental weakness that undermines the paper’s persuasiveness. The author should integrate a discussion of concrete mitigation techniques to strengthen their argument.
6. There are numerous stylistic issues, including unnecessary repetition of the algorithm-bias concept and overly long, convoluted sentences that impair readability. Additionally, the manuscript contains several English language errors, most glaringly the typo “Seocndly,” and should be thoroughly proofread.
Author Response
We thank the reviewer for their detailed and constructive feedback. Based on the concerns raised, we have extensively revised the manuscript, carefully addressed all substantive suggestions raised in the review, and made corresponding changes throughout the text.
In response to the observation that the discussion of EU regulation was overly descriptive and lacked analytical depth, we have substantially expanded Section 5. The revised section now provides a detailed legal and technical analysis of algorithmic opacity, examining why deep learning models resist interpretability and how this limits the enforceability of rights. We also explore the tensions between transparency obligations and trade secrecy claims, arguing that without enforceable explainability standards or independent audits, key regulatory goals risk becoming aspirational rather than actionable.

To rectify the omission of state-level U.S. regulation, we have extended Section 3 to include leading examples of state legislation that directly impact algorithmic governance.

In agreement with the reviewer, we also recognize the importance of situating the call for global standards within real-world political and economic constraints. The revised conclusion incorporates an expanded discussion of the jurisdictional, cultural, and industrial barriers to regulatory harmonization. We now explicitly acknowledge the opposition of leading U.S. technology firms to EU-style regulation, as well as the deeper structural asymmetries between jurisdictions that prioritize innovation and those that prioritize precaution.
Regarding Comment 4, the comparative-law dimension of the paper has been substantially strengthened. Section 4 has been revised to move beyond parallel exposition toward an integrated analysis of regulatory philosophy, institutional design, and enforcement capacity across the EU and U.S. models.
We have also responded to the reviewer’s concern regarding the absence of technical content by adding a detailed discussion of mitigation strategies currently used to address algorithmic bias. Although the paper’s scope remains focused on legal issues, in Section 5 we outline key techniques used at the pre-processing, in-processing, and post-processing stages of model development.
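For readers unfamiliar with these stages, the sketch below illustrates one common pre-processing technique, reweighing the training data so that group membership carries no information about the label. It is a minimal illustration only, not taken from the manuscript: the group attribute, features, outcome, and model are all synthetic, and scikit-learn is assumed merely for convenience.

```python
# Illustrative sketch only: a reweighing-style pre-processing step that
# rebalances sample weights across group/label combinations before training.
# The data, group labels, and features are synthetic, not from the manuscript.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, size=n)        # hypothetical protected attribute
x = rng.normal(size=(n, 3))               # three synthetic features
# Synthetic outcome whose base rate differs across the two groups.
y = (x[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.6).astype(int)

# Pre-processing: weight each (group, label) cell so that, in the weighted
# training set, group membership is statistically independent of the label.
weights = np.ones(n)
for g in (0, 1):
    for lbl in (0, 1):
        cell = (group == g) & (y == lbl)
        expected = (group == g).mean() * (y == lbl).mean()
        weights[cell] = expected / cell.mean()

model = LogisticRegression().fit(x, y, sample_weight=weights)
```

In-processing methods instead add a fairness term or constraint to the training objective itself, while post-processing methods adjust decision thresholds or scores after the model has been trained.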
We have also rechecked the language of the paper.
Thank you again for these valuable comments!
Reviewer 3 Report
Comments and Suggestions for Authors
The paper lacks research hypotheses. The aim of this paper is defined as: “The aim of this paper is to conceptualize algorithmic bias and present current legal responses.”
While the part concerning US law is discussed quite interestingly from the perspective of this goal, including a critique of the possibility of achieving it, the EU part is mainly a presentation of the legal grounds of the AI area itself, without indicating how the proposed solutions may address the problem of algorithmic bias.
Author Response
We thank the reviewer for their thoughtful comments and helpful suggestions. We have revised the manuscript to more clearly articulate the paper’s objectives (last section of the introduction) and expanded the section on EU law to go beyond a presentation of legal instruments. The revised text now includes a critical evaluation of these instruments’ potential and limitations in addressing algorithmic bias, with specific attention to enforceability, opacity, and normative design.
Reviewer 4 Report
Comments and Suggestions for Authors
This article critically examines the challenges of regulating algorithmic bias through legal frameworks. The author cites several well-known examples, such as COMPAS, Amazon’s AI-based recruiting system, and generative AI models like Gemini, which have demonstrated discrepancies between generated images of American Indians or Viking warriors and historical reality. In addition to case illustrations, the article offers a comparative analysis of U.S. and EU approaches to the regulation of algorithmic bias, highlighting difficulties in governance stemming from phenomena such as the “black-box” nature of AI systems and the practical challenges of enforcement. Overall, the article is logically structured and supported by extensive references.
However, Section 5 would benefit from further elaboration. One central question concerns the very definition of “equality.” The article could consider whether an algorithm trained on all available global data would indeed be unbiased. This assumption is problematic, given the inherently unequal distribution of socioeconomic power in society. As such, the notion of a bias-free dataset is itself questionable.
For instance, a study in the United States found that algorithms predicting healthcare needs—based on individuals’ medical expenditures—concluded that white patients were more in need of assistance, merely because they spent more on healthcare than Black patients. This result, however, failed to consider the underlying economic disparity: white patients, on average, had greater financial resources to allocate to healthcare. This raises the critical issue of whether “equality” in algorithmic design is a normative construct rather than an empirical reality. Perhaps it is precisely because social structures are unequal that algorithm developers must deliberately equalize certain variables during training. The article might expand by asking: Which variables should be excluded from algorithmic consideration to achieve fairness?
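A small synthetic simulation can make this proxy problem concrete. The sketch below is illustrative only and is not drawn from the cited study: the two groups, the assumed 30% spending gap, and all quantities are invented to show how a score built on spending understates the need of the group that spends less per unit of need.

```python
# Illustrative sketch only: synthetic data showing how using *spending* as a
# proxy label for *need* disadvantages a group that spends less per unit of need.
# The groups, the 30% spending gap, and all numbers are invented assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)               # two hypothetical groups
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true (unobserved) healthcare need
# Assumption: group 1 spends 30% less than group 0 at the same level of need.
spending = need * np.where(group == 1, 0.7, 1.0)

# A score trained to predict spending inherits this gap; here we use spending
# itself as the score and flag the top 10% as "high need".
flagged = spending >= np.quantile(spending, 0.9)
for g in (0, 1):
    sel = group == g
    print(f"group {g}: flagged {flagged[sel].mean():.1%}, "
          f"mean true need of flagged {need[flagged & sel].mean():.2f}")
```

Under these assumptions the lower-spending group is flagged less often, and those of its members who are flagged must have a higher underlying need to cross the spending threshold, which is the mechanism the example describes.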
In the U.S., the legal debate surrounding the meaning of “equality” has been prominently illustrated by recent developments in affirmative action. On June 29, 2023, the U.S. Supreme Court ruled that Harvard University and the University of North Carolina’s consideration of race in their admissions processes violated the Equal Protection Clause of the Fourteenth Amendment. The case was brought by Students for Fair Admissions (SFFA), alleging discrimination against Asian-American applicants. The Court, in a 6–3 decision, found that the policies lacked a sufficiently defined objective and inevitably imposed burdens on applicants.
Secondly, in its comparative legal analysis, the article focuses primarily on digital-era regulations but neglects to discuss pre-existing anti-discrimination frameworks that remain relevant. For example, in the European Union, directives such as the Racial Equality Directive, the Employment Equality Directive, and the Equal Treatment Directive provide robust legal protections against discrimination. The Court of Justice of the European Union has interpreted these instruments in ways that directly impact the rights of minority groups. Furthermore, all EU member states are also parties to the European Convention on Human Rights, whose Article 14 prohibits discrimination on grounds including sex, race, colour, language, religion, political opinion, national or social origin, and membership in a national minority.
Third, the article rightly identifies enforcement as a major obstacle to regulating algorithmic bias. In this regard, the EU has emphasized the internal governance responsibilities of AI deployers. The underlying rationale appears to be a recognition that bias-free AI is an aspirational ideal rather than an attainable standard. Thus, by establishing ex ante duties of care for AI developers and deployers—particularly concerning risk assessment, documentation, and oversight—regulations aim to shift liability based on whether these actors have fulfilled their obligations. When compliance with such governance structures is demonstrated, it may serve as a defense against claims of negligence.
Author Response
We sincerely thank the reviewer for their thoughtful and well-structured feedback. All suggestions have been carefully considered and integrated into the revised manuscript. Section 5 has been expanded to reflect more critically on the normative dimensions of equality in algorithmic design, including the feasibility of fairness in light of systemic social disparities. Following the third point, we have also incorporated reference to the 2023 U.S. Supreme Court ruling on affirmative action as a contemporary legal lens on the evolving meaning of equality. We agree that anti-discrimination laws should be mentioned as well; therefore, the EU section now includes relevant pre-digital anti-discrimination directives and Article 14 of the European Convention on Human Rights, providing deeper context for the current regulatory landscape. We highly appreciate the reviewer’s insights, which have significantly strengthened the analytical depth of the paper.
Round 2
Reviewer 4 Report
Comments and Suggestions for Authors
The revised draft is well written. I have no further questions and recommend that this paper be published.