The European Commission’s Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)

Abstract: On 21 April 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). This article takes a critical look at the proposed regulation. After an introduction (1), the paper analyzes the unclear preemptive effect of the AIA and EU competences (2), the scope of application (3), the prohibited uses of Artificial Intelligence (AI) (4), the provisions on high-risk AI systems (5), the obligations of providers and users (6), the requirements for AI systems with limited risks (7), the enforcement system (8), the relationship of the AIA with the existing legal framework (9), and the regulatory gaps (10). The last section draws some final conclusions (11).


Introduction
Artificial intelligence (AI) systems based on machine learning (ML) and other techniques have the potential to improve our lives as well as the overall economic and societal welfare. They can contribute to better healthcare services, safer and cleaner transport systems, better working conditions, higher productivity, and new innovative products, services, and supply chains. AI systems can also benefit the public sector in a number of ways, for example, by automating repetitive and time-consuming tasks, or by providing public agencies with more accurate and detailed information, forecasts, and predictions, which in turn, might lead to personalized public services tailored to individual circumstances. AI-powered systems may even help to respond to key global challenges, such as climate change and the novel coronavirus pandemic.
However, as with every disruptive technology, AI systems come not only with benefits but also with substantial risks, raising a broad variety of legal and ethical challenges. AI systems have the potential to unpredictably harm people's life, health, and property. They can also affect fundamental values on which western societies are founded, leading to breaches of fundamental rights of people, including the rights to human dignity and self-determination, privacy and personal data protection, freedom of expression and of assembly, non-discrimination, or the right to an effective judicial remedy and a fair trial, as well as consumer protection [1][2][3].
Against this backdrop, we welcome that the European Commission presented, on 21 April 2021, the proposal for a Regulation "laying down harmonized rules on Artificial Intelligence", the so-called "Artificial Intelligence Act" (hereinafter: AIA) [4]. We particularly appreciate that the AIA follows a risk-based approach by imposing regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety.
However, a closer analysis shows that the proposed AIA may need improvement in many areas. This position paper is to be understood as a preliminary assessment of the AIA, intended to provide a meaningful basis for further discussion.

Unclear Preemptive Effect of the AIA and EU Competences
The AIA does not clearly state to what extent the Member States may deviate from the substantive requirements contained therein, i.e., whether it establishes a framework of full harmonization or one of minimum harmonization that allows Member States to maintain or introduce provisions diverging from those laid down in the AIA, including more stringent provisions to ensure a different level of protection (on the difference between full/maximum harmonization and minimum harmonization: [5] (pp. 269-270), [6] (pp. 47-83), [7], [8] (pp. 133-134)). For example, do Member States still have the competence to ban certain AI systems beyond the prohibitions of Art. 5 AIA, e.g., biometric systems or social scoring by private companies?
Regarding the legislative competences, the proposed regulation follows the European Commission's tendency to use Art. 114 TFEU as a general competence for the regulation of commercial law. Whether this legal basis can be used for the AIA is doubtful in light of the Tobacco Advertising case-law [9,10], especially regarding the prohibition of certain systems used by state authorities (cf. Art. 5(1)(c) and (d) AIA). How does the prohibition of certain AI systems used by public authorities contribute to removing barriers to trade in the internal market? Likewise, it is questionable whether these prohibitions can be based on Art. 16(2) TFEU, since these bans do not directly aim at the protection of personal data but at other fundamental rights and democratic values. Additionally, the exceptions in Art. 5(1)(d)(iii) AIA address detailed requirements for administrative procedural rules including certain groups of cases and the requirement of prior authorization by a judicial or independent administrative authority. These kinds of procedural obligations concerning state security, risk prevention, and prosecution are usually part of the administrative police law or the criminal procedure code of the Member States [11] (p. 143).

Overly Broad Definition of "AI Systems"
The proposal regulates "AI systems". Besides the question of what the difference between "AI" [4] (p. 11; recitals (28), (38), (39), (40), and (47) AIA) and "AI systems" is supposed to be, the overly broad substantive scope of the AIA seems problematic [12] (pp. 215-216). Art. 3(1) AIA in conjunction with Annex I covers almost every computer program. Such a broad approach may lead to legal uncertainty for developers, operators, and users of AI systems [13] (p. 280). Many associate the term "artificial intelligence" primarily with machine learning, and not with simple automation processes in which pre-programmed rules are executed according to logic-based reasoning.
The AIA also goes beyond the necessary degree of regulation, at least in some cases. On the one hand, a wide definition of "AI systems" may be justified in light of the prohibited AI practices delineated in Art. 5 AIA to offset the threats posed by different kinds of software to the fundamental rights of individuals. Indeed, it seems to make little difference to the rights of affected citizens whether the banned practices (subliminal manipulation, exploitation of vulnerabilities, social scoring, or remote biometric identification) are enabled by machine learning or by logic-based reasoning. On the other hand, such a broad definition is too wide when it comes to high-risk AI systems. The mandatory requirements envisaged for these systems in Title III, Chapter 2 are based on the observation that a number of fundamental rights are adversely affected, in particular, by the special characteristics of ML, such as opacity, complexity, dependency on data, and autonomous behavior [14] (p. 11 at 3.5.). Since these characteristics are either not or only partly present in simple (logic-based) algorithms, the broad definition of AI potentially leads to overregulation.

Unclear Scope with Regard to AI Components
The AIA also does not clarify how various components of AI systems should be treated. These could include pre-trained AI systems from different manufacturers forming part of the same AI system, or components of one and the same system that are not released independently. It should therefore be clarified whether separate components of AI systems would individually be required to conform to the AIA, and what the implications for responsibilities would be when these components are not compliant.

Unclear Scope with Regard to Academic Research and Open-Source Software (OSS)
Another regulatory flaw of the AIA is that, unlike Art. 89 GDPR [15], it does not provide for any exception for research purposes. Therefore, in situations where researchers collaborate with the industry and publish their model for academic purposes, they may run the risk of being regarded as providers who "develop an AI system" with a view to "putting it into service" (Art. 3(2) AIA), i.e., supplying the system "for first use directly to the user or for own use" (Art. 3(11) AIA) [16] (p. 16).
This risk is magnified by the fact that open-source software (OSS) is an important part of the research ecosystem. Classifying any release of OSS as "placing on the market" or "putting into service" and imposing conformity assessment requirements on it would simply be detrimental to the entire scientific research ecosystem.

Territorial Scope
We welcome the broad territorial scope aiming to prevent the migration of innovation to non-European countries. However, a clarification of the term "located" referenced in Art. 2(1)(b) AIA is of utmost importance. From our point of view, "located" refers to, or at least should refer to, the user's or provider's location and not the location of the AI system [13] (p. 278), because determining the specific location of an AI system is increasingly difficult, especially with the growing use of cloud computing technologies. Moreover, relocations to third countries would otherwise be very easy. Although recital (11) AIA clearly supports the position taken here, a clarification in the binding part of the regulation would be preferable.
Furthermore, limiting the geographical scope of the AIA to only the "use" of AI systems within the European Union (EU) (Art. 2(1)(c) AIA) excludes cases where the development, sale, or export of high-risk AI systems or even prohibited AI systems takes place within the EU but the systems are used outside of it. This provision creates serious legal and ethical concerns for the users of AI systems located outside of the EU. However, taking into account that the AIA is based on the internal market clause (Art. 114 TFEU), the provision seems justified, because it is difficult to imagine how the AIA could contribute to the internal market if an AI system is only developed in the EU but never put into operation there.

Prohibited Uses of AI (Art. 5 AIA)
The AIA follows the right approach in prohibiting certain especially intrusive forms of AI. However, there are still many loopholes that the legislators may need to close. Art. 5(1)(a) and (b) AIA contain two prohibited practices that claim to regulate manipulation. However, neither the said article nor recital (16) AIA defines what constitutes a "subliminal technique" or what "materially distorting" a person's behavior means. Many user experience (UX) features influence a user's behavior in subtle ways that may go unnoticed by the user. Art. 5(1)(a) AIA covers only AI practices which directly intend to harm other people. It seems likely that most of these kinds of systems are already illegal under the criminal law of most Member States. Influencing and nudging users into commercial services they potentially do not want is not covered by Art. 5 AIA. Admittedly, a general ban on these kinds of advertising techniques would be overreaching. Moreover, the Unfair Commercial Practices Directive (UCPD) [17] prohibits commercial practices which distort human behavior under certain conditions (as to the question whether so-called "dark patterns" and algorithmic nudging/manipulation are prohibited by the UCPD: [18] (paras. 28, 29), [19] (pp. 109-135), [20,21]). Therefore, clarifying the terms of recital (16) or Art. 3 AIA would be useful to ascertain which AI systems using such techniques fall within the scope of Art. 5 AIA. The AIA should also take special consideration of "collective harms" emanating from the use of manipulative AI systems, since such harms cannot be effectively identified, proved, or remedied by way of private law remedies [22] (pp. 4-5).
Under Art. 5 AIA, it is not sufficient that an AI system is likely to manipulate behavior; the distorted behavior of the person must also be likely to cause physical or psychological harm. Even where the prohibitions under Art. 5 AIA do not apply to such manipulative AI systems, for example because the system is manipulative but the causal link to harm is missing, the system is not per se covered as a high-risk system under the AIA.
The AIA did not follow the demands for a general prohibition of deep fakes. Deep fakes could be regarded as manipulative AI systems in the sense of Art. 5(1) AIA, but they usually do not raise a direct risk of physical or psychological harm; rather, they cause individual defamation or reputational damage and contribute to the systemic problems of fake news and manipulation. Deep fakes are explicitly mentioned in Art. 52(3) AIA, which requires disclosure that the content is artificially generated or manipulated.

Social Scoring (Art. 5(1)(c) AIA)
The ban on AI systems used for social scoring purposes is limited by Art. 5(1)(c) AIA to systems deployed by public authorities. It remains unclear when a system is "leading" to a specific outcome, which leaves open the possibility that the authority using the social scoring system may claim that the score was not a determinative factor [22] (p. 7). Moreover, Art. 5 AIA does not clarify to what extent "the use of an AI system" takes place on behalf of public authorities, especially given that digital technologies are mostly developed by the private sector and then deployed by public administrations or other public authorities. Whether citizen scoring, commonly built on broad private sector data sets that augment administrative data, falls within the scope of Art. 5 remains unclear [22] (p. 6). Therefore, the AIA should clarify the meaning of the phrase "on their behalf" in Art. 5(1)(c) AIA accordingly.
By limiting the ban on social scoring to public authorities, the AIA ignores the use of such systems by private entities, even in high-risk areas where they could affect the fundamental rights of people via an indirect third-party effect. Accordingly, the Commission should re-evaluate whether the risks of private social scoring applications implemented broadly across contexts, in sensitive areas of public concern such as credit scoring, housing, or public employment, are properly addressed by the AIA. In addition, limiting the ban on social scoring to situations where the social score leads to treatment of persons that is "disproportionate to their social behavior" creates legal uncertainty [23] (p. 365).

Biometric AI Systems (Art. 5(1)(d) AIA)
The approach of Art. 5(1)(d) AIA to prohibit only biometric identification systems (BIS) used for law enforcement is too narrow. Private companies, in particular, pursue their own interests and are not bound to the common good, which makes it all the more imperative to prohibit practices that pursue the commercial, private interests of companies. Therefore, the prohibition should be applicable to all uses of biometric AI systems, public or private, and should hence be extended to all AI systems used by other public authorities and by private actors.
Moreover, the exceptions for BIS are too broad, since the proposal establishes exceptions not only for the time-sensitive search for crime victims and the prevention of terrorist attacks, but also for the detection of suspects of criminal offences referred to in Council Framework Decision 2002/584/JHA [24] that are punishable by a custodial sentence of at least three years. These exceptions will regularly be met in the practical work of law enforcement.
All in all, there should be a general ban on BIS in publicly accessible spaces, which should be extended to all kinds of biometric, behavioral, or emotional signals in any context, e.g., DNA, movement analysis, and post remote biometric identification, as these could enable indiscriminate mass surveillance. The mere anticipation of being identified in publicly accessible spaces can already cause chilling effects on the exercise of fundamental rights [25] (pp. 125-126), [26].
It is also interesting to note that Art. 5(1)(d) AIA prohibits only the use of BIS, which differ from "biometric categorization systems" (BCS). While a BIS identifies natural persons on the basis of their biometric data, a BCS assigns natural persons to specific categories, such as sex, age, ethnic origin, or sexual or political orientation [28] (p. 288). Going beyond the definition contained in Art. 3(35) AIA, such categories may also be much more sophisticated, relating, for example, to a concrete regional background, a particular risk group, or certain personality traits, in which case biometric categorization is usually combined with detection techniques [27] (p. 56). This difference is acknowledged in Annex III of the AIA, which explicitly uses the term "biometric identification and categorization". Moreover, an emotion recognition system (ERS) tries to detect different emotions by integrating information from facial expressions, body movement, gestures, and speech. However, the automated recognition of characteristics like gender, sexuality, or ethnicity, as well as of emotions, by BCS and ERS is not prohibited under the AIA. All these features have repeatedly been the cause of biased automated decisions in the past. AI systems like BCS and ERS, which categorize individuals on the basis of biometrics and emotion detection into clusters according to grounds of discrimination prohibited under Art. 21 of the Charter of Fundamental Rights of the European Union, should be prohibited under Art. 5 AIA [16] (pp. 25-28) (see also [29] (pp. 56-59)).

Possibility of the European Commission to Add Prohibited AI Practices?
Whereas Art. 7 AIA gives the European Commission the possibility to amend the list of stand-alone high-risk AI systems, the AIA does not confer the same power when it comes to prohibited practices. According to the proposal, Art. 5 AIA cannot be amended by the European Commission. Since some problematic features of AI practices can only be determined ex post, the legislators could consider allowing the European Commission to have the option to amend the prohibited practices under Art. 5 AIA. However, given the far-reaching consequences of total bans, such a delegation of power would go too far. Therefore, in our view, it is justified that the power to adopt delegated acts is conferred on the European Commission only with regard to high-risk AI systems.

Classification of High-Risk AI Systems
While we appreciate the risk-based approach adopted by the European Commission, we strongly believe that a more detailed classification of risk may be necessary for the industry to perform the self-assessment of risks associated with its products. Currently, the AIA prescribes a broad definition of "AI" and generalizes the risk levels of various AI applications, which may fail to identify the specific risks of individual AI systems, leading to insufficient compliance. A more granular classification of risk would also be conducive to ensuring the safety of high-risk AI systems developed outside the EU. Therefore, the AIA should reflect flexibility and adaptability in its risk classification: it should allow for modifications to the areas listed in Annex III, and not just to specific uses within the existing categories, along with the possibility to add entirely new categories of risk, including ones that may not currently be contemplated. In doing so, the European Commission should conduct risk assessments following transparent mechanisms in order to enable companies to anticipate possible expansions as early as possible. During this process of revising the list of high-risk AI systems, the public should also be granted consultation and participation rights. Finally, for the classification of risks to keep pace with continuous innovation, the risk assessment should be based on continuously updated standards grounded in ongoing research.
In particular, the AIA omits several relevant contexts of use of high-risk AI systems which can pose significant risks, such as AI systems used for determining insurance premiums, health-related AI systems that are not already covered under Annex II, or AI systems deployed for housing purposes, to name a few [30]. It is also questionable why Annex III.6. narrowly focuses on the use of AI systems by law enforcement authorities for "individual risk assessments for natural persons", detection of "the emotional state of a natural person", and "profiling of natural persons". In doing so, the AIA limits its scope to individual natural persons, excluding AI practices which are directed at groups or areas, such as "predictive policing", which uses geospatial data to predict future crimes. Since these systems may contribute to over-policing and systemic discrimination, we are of the opinion that predictive policing should also be included under Annex III.6.
A problem of duplicate conformity assessments may arise for AI systems that are products or safety components covered by the New Legislative Framework (NLF) under Annex II. Such AI systems may undergo conformity assessment once under the NLF (Annex II) and a second time when integrated into a machine, according to Annex III. Therefore, there is a need for clear provisions that avoid duplicate tests for such AI systems, which would otherwise create inconsistency in the legal framework through contradictory assessments, accompanied by additional expenditure for companies.

High-Risk AI Systems Already Placed on the Market
Furthermore, we observe that under the AIA, high-risk AI systems must undergo a new conformity assessment procedure whenever they are "substantially modified" (Art. 43(4) AIA), whereby recital (66) AIA sets a low threshold for such a "substantial modification". In contrast, the proposal creates a significantly higher threshold for high-risk AI systems already placed on the market or put into service. According to Art. 83(2) AIA, the Regulation applies to these systems only if they are "subject to significant changes in their design or intended purpose". By focusing only on design and purpose, the proposal fails to bring external security risks posed to AI systems within the purview of a "significant change", thereby creating a higher threshold for changes or modifications to these AI systems [31] (p. 13).

Standardization as Non-Justified Delegation of Rule-Making Power
Title III, Chapter 2 AIA contains an extensive list of essential requirements which must be observed before a high-risk AI system is put on the market. In line with the NLF, all of these requirements are worded in a rather broad way. Instead of formulating the requirements for high-risk AI systems itself, the AIA defines only the essential requirements, whereas the details are left to standards elaborated by the European Standardization Organizations (ESOs). Standardization has many positive aspects. Standards can promote the rapid transfer of technologies, ensure the interoperability of systems, and pave the way to establish uniform requirements that support the implementation of legal requirements and ethical values [32] (pp. 3-4). At the same time, however, standards create concerns about an excessive delegation of regulatory power to private ESOs. Such a delegation of power is problematic, above all, due to the lack of democratic oversight, the limited possibilities for relevant stakeholders (civil society organizations, consumer associations) to influence the development of standards, and the lack of judicial means to control standards once they have been adopted. The standardization of AI systems is not a matter of purely technical decisions. Rather, a series of legal and ethical decisions must be made, which should not be outsourced to private ESOs, but which require a political debate involving society as a whole. The European standardization process must reflect European values and fundamental rights, including consumer protection, by granting European stakeholder organizations effective participation rights [33].
Therefore, the European Commission should reconsider its approach. Instead of delegating fundamental decisions to ESOs, the AIA should establish legally binding provisions for the essential requirements of high-risk AI systems, including questions relating to but not limited to which forms of algorithmic discriminations should be prohibited; how algorithmic biases should be mitigated; what type and degree of transparency AI systems should have; how AI-driven decisions and predictions should be explained in a comprehensible way. These legally binding obligations could then, in turn, be further specified by harmonized standards for specific applications by the ESOs. Since such harmonized standards can still have far-reaching social and legal consequences, European policymakers should at the same time take the necessary steps to improve the overall process of standardization, including the structural and organizational framework of ESOs like CEN and CENELEC to facilitate an inclusive standardization system which provides for democratic input on the development of technical standards [33].

Ex Ante Conformity Assessment
We welcome that AI systems posing a high-risk must be subject to a prior conformity assessment before they can be placed on the market or otherwise put into operation in the EU.
What is concerning, however, is that the so-called "stand-alone" AI systems (Annex III) only need an internal control according to Art. 43(2) AIA. The involvement of a notified body according to Art. 43(1) AIA is only necessary for a very limited number of AI systems, e.g., certain biometric systems. With this approach, the proposal gives AI providers an unduly broad margin of discretion regarding the application and enforcement of the AIA, especially regarding the questions: (i) whether the software used is an AI system; (ii) whether the system is likely to cause harm; and (iii) how to comply with the mandatory requirements of Title III, Chapter 2 AIA, which are not laid down in detail in the proposal. Taking into account that the CE marking system has failed to protect human health and safety in other sectors, such as the medical sector, where 400,000 defective breast implants were certified by TÜV Rheinland as being safe [34] (pp. 13-27), the European Commission should explore whether at least certain high-risk AI systems should be subject to an independent ex ante control.

Essential Requirements for High-Risk AI Systems
We appreciate that the AIA intends to introduce mandatory requirements for high-risk AI systems. Yet, as already mentioned under Section 5.3., most of the requirements are formulated in a rather broad way, giving too much discretion to AI providers. In addition, some of the mandatory requirements are not convincing for the following reasons.

Data and Data Governance (Art. 10 AIA)
Regarding data and data governance, Art. 10 AIA refers mainly to training, validation, and testing data sets. By limiting itself to these data sets, the AIA disregards other stages of the ML lifecycle that should also be subject to data quality criteria and data governance practices. (We note that Art. 10 AIA can also have an impact on data licenses, which are increasingly relevant in practice and which have as their object access to data, usually for remuneration. Consequently, "data quality" can be a reference point for contractual conformity: [35].)
On the other hand, the requirement in Art. 10(3) AIA that training, validation, and testing data sets be, inter alia, free of errors is practically impossible to meet [13] (p. 280). Such a level of perfection is technically not feasible and might also hamper innovation. Consequently, the AIA should rather require that providers take appropriate measures to keep data sets as free of errors as possible.
In addition, several privacy-enhancing techniques such as differential privacy [36] intentionally introduce noise into data sets in order to prevent the unintentional disclosure of sensitive data. This renders the data not free of "errors", although it helps to protect sensitive data, which is also in consonance with the principle of data protection by default as provided under Art. 25 GDPR. Accordingly, Art. 10 AIA should be amended to allow the use of privacy-enhancing techniques in the data governance practices of high-risk AI systems.
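The tension between differential privacy and an error-free data requirement can be made concrete with a minimal sketch of the Laplace mechanism, the textbook differentially private way to release a counting statistic. The function name and parameters below are illustrative, not drawn from the AIA or any particular library:

```python
import numpy as np

def laplace_count(data, predicate, epsilon=1.0, rng=None):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1, so noise drawn from
    Laplace(scale = 1/epsilon) yields epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data set: ages of data subjects.
ages = [34, 41, 29, 62, 55, 38, 47]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
# The released statistic deliberately deviates from the true value (4):
# the data set of released outputs is, by design, not "free of errors"
# in the sense of Art. 10(3) AIA, yet this is exactly what protects
# the individuals in the data.
```

The smaller `epsilon` is chosen, the stronger the privacy guarantee and the larger the deliberate "error", which is why a literal error-free requirement would rule out this class of techniques.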
While mandating that the training, validation, and testing data sets should be free of errors and complete, the AIA fails to define the criteria for measuring the quality of data sets (e.g., predictive accuracy, robustness, fairness of trained machine learning models).
Although the AIA requires AI providers to use data governance practices that shall concern, in particular, "examination in view of possible biases" (Art. 10(2)(f) AIA), the proposal fails to clarify the notion of "bias". This is problematic for various reasons. First, there is no common understanding or generally accepted definition of "bias" [37]. Second, recital (44) AIA states that AI systems should not become "the source of discrimination prohibited by Union law". However, the AIA does not indicate what forms of biases are prohibited under the existing framework and how algorithmic bias should be mitigated. Indeed, it has been argued that existing EU non-discrimination law "displays a number of inconsistencies, ambiguities, and shortcomings that limit its ability to capture algorithmic discrimination in its various forms" [38]. What is also concerning is that there are different notions of fairness which are incompatible with each other (e.g., individual vs. group fairness) [39]. This leads to the problem of deciding which fairness standards are desirable. All in all, an unclear understanding of "bias" will only cause confusion among AI providers, as they will not be able to apply appropriate measures to prevent or minimize bias or discrimination.
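The incompatibility of fairness notions can be illustrated with a small, self-contained sketch (the metric implementations are our own illustrative choices, not prescribed by the AIA): when the base rates of positive outcomes differ between two groups, even a perfectly accurate predictor satisfies a group fairness notion based on true-positive rates while violating demographic parity.

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between groups 0 and 1."""
    rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy data: the base rate of positives differs between the groups (2/4 vs 1/4).
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # a perfectly accurate predictor
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0: equal TPR in both groups
print(demographic_parity_gap(y_pred, group))         # 0.25: parity is violated
```

A provider told only to "examine possible biases" must therefore first decide which of these mutually exclusive criteria to satisfy, which is precisely the normative choice the AIA leaves open.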

Transparency and Provision of Information to Users (Art. 13 AIA)
The requirements for transparency of high-risk AI systems laid down in Art. 13 AIA are certainly a step in the right direction. However, it is rather problematic that this norm only formulates general requirements without specifying them [40]. Art. 13 AIA focuses on the interpretability of AI systems' output by their users without clarifying the concept of interpretability and its correlation with explainability. While all policy documents preceding the AIA focused on explainability, the AIA refers only to interpretability (as to the various terms, cf. [40]). Thus, the AIA is silent on the specific measures that need to be taken to ensure that AI systems are sufficiently transparent. Instead, this issue is largely left to the self-assessment of the provider, who must ensure that the system undergoes an appropriate conformity assessment procedure before it is placed on the market or put into service (Art. 16(a) and (e) AIA). While it is true that, according to Art. 16(j) AIA and Art. 23 AIA, providers must demonstrate, upon request of a national competent authority, that their AI system complies with the transparency requirements set out in Art. 13 AIA, the question of how to make AI systems interpretable is left to the discretion of the AI system provider.
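To illustrate the kind of concrete, model-agnostic measure the AIA could have referenced, the following sketch implements permutation importance, one widely used interpretability technique. This is an illustration written against NumPy only, under our own simplifying assumptions, not a measure mandated by Art. 13 AIA:

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, rng=None):
    """Drop in the metric score when a feature column is shuffled.

    A model-agnostic, post-hoc interpretability measure: the larger the
    drop, the more the model's predictions depend on that feature.
    """
    rng = rng or np.random.default_rng()
    baseline = metric(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature/target link
            drops.append(baseline - metric(y, model(Xp)))
        importances.append(float(np.mean(drops)))
    return importances

# Toy "model" whose output depends only on the first of two features.
model = lambda X: X[:, 0]
metric = lambda y, pred: -np.mean((y - pred) ** 2)   # negative MSE (higher = better)

X = np.random.default_rng(1).normal(size=(200, 2))
y = X[:, 0]
imps = permutation_importance(model, X, y, metric, rng=np.random.default_rng(2))
# imps[0] is clearly positive, imps[1] is 0: only the first feature matters.
```

Whether such a technique counts as sufficient "interpretability" for a given high-risk system is exactly the question the AIA currently leaves to the provider's discretion.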
Another problem is that under the AIA, neither providers nor users are required to provide transparency to the person affected by an AI-based prediction or decision. According to Art. 13 AIA, the provider must ensure transparency only vis-à-vis the user of the system. In this regard, we recommend that the AIA should be amended to include an obligation to explain AI-based predictions or decisions to the affected persons in order to safeguard the rights and freedoms of individuals.

Human Oversight (Art. 14 AIA)
In our opinion, the provision on human oversight (Art. 14 AIA) is rife with impracticalities. First, the requirement that a human fully understand the capacities and limitations of a high-risk AI system (Art. 14(4)(a) AIA) is currently not feasible for some AI systems and could, accordingly, lead to an indirect ban of such systems. If this is the intention of the proposal, the text of the AIA should at least point out that AI providers must not use opaque high-risk AI systems.
Second, Art. 14 AIA nowhere specifies when, how, and at what stage human oversight is required. Strangely enough, it is also silent on any obligation of the users to ensure human oversight. Comprehensive human oversight must entail supervision of the AI system during its entire lifecycle, ensured by mandatory transparency measures about the use of AI applications, including public statistics on deployments, aims, scope, and developers.
The measures stipulated to ensure human oversight under Art. 14(3) AIA and the corresponding provision of Art. 14(4) AIA are also silent on aspects such as non-discrimination and fairness. In that regard, it would be prudent to involve National Centers of Expertise on AI, with strong engagement of civil society, equality bodies, and human rights organizations, to provide qualitative input on these oversight measures and to propose guidelines for assessing bias in AI systems.

EU Database (Art. 60 and Annex VIII AIA)
The AIA proposes a new central database, managed by the European Commission, for the registration of "stand-alone" high-risk AI systems. While the purpose of an EU database is to facilitate societal scrutiny for greater transparency, we question why it should be limited to stand-alone high-risk AI systems. The database should also encompass AI systems used by public authorities irrespective of their assigned risk level, as well as those used by private entities whenever their use has a significant impact on an individual, a specific group, or society at large.
Moreover, the information required by Annex VIII does not seem to be sufficiently comprehensive: the data listed there omits substantial information, such as the intended purpose of the system, an explanation of the model or type of ML involved, the actors involved in developing and deploying the system, and the results of any algorithmic impact assessment or human rights impact assessment carried out.

Misaligned Obligations
Since many AI systems process large-scale data, the AIA inherently needs to be aligned with the existing legal framework. In this context, it seems conspicuous that the AIA requires the providers (developers) of an AI system to perform a risk assessment, whereas under the GDPR, the user (as defined under the AIA) acts as the "data controller" who takes on the task of risk assessment. This shift of roles removes "users" from any responsibility for risk assessment under the AIA. Consequently, the AIA should oblige users of high-risk AI systems to carry out an AI impact assessment similar to the one required under Art. 35 GDPR, Art. 39 EUDPR [41], or Art. 27 LED [31] (p. 9), [42].
Moreover, the concept of user needs to be clarified in multi-stakeholder environments, where more than one actor is "using" the system. This is the case, for example, in the healthcare context, where a multitude of actors is involved (healthcare organization, nurses, physicians, etc.). Since the AIA does not clarify who should be regarded as a "user" in situations where more than one person has the authority to decide how to use the AI system, there is a risk of improper compliance with the obligations of the AIA [43] (p. 12).

General-Use AI Systems
In situations where AI systems can be used for many different purposes (general-use AI systems), such a technology may be integrated into a high-risk system without the provider having any, or only limited, influence over compliance with the obligations for high-risk AI systems. This aspect assumes major relevance since Art. 3(2) AIA presumes that the provider or developer of an AI system is also the one deploying it in a high-risk application. Therefore, we strongly suggest that the European Commission clarify the responsibilities of providers of general-use AI systems.

AI as a Service (AIaaS)
Moreover, the AIA should clarify and delineate the responsibilities of providers and users with regard to AIaaS. Providers of AIaaS may develop either off-the-shelf software solutions or industry-specific customized software. This means that their customers often have no insight into the technology. Still, it is unclear whether customers of AIaaS qualify as "providers" or "users". If customers are regarded as "providers", they might not be in a position to satisfy the mandatory requirements set out in Title III, Chapter 2, due to their lack of knowledge. Accordingly, it is important that the AIA map the responsibilities of providers and customers of AIaaS.

Post-Market Monitoring by Providers
We particularly welcome the provision leaving the post-market monitoring plan to the discretion of providers and users (Art. 61 AIA), facilitated by provisions obliging providers to maintain logs automatically generated by their high-risk AI systems, by virtue of a contractual arrangement with the user or otherwise by law (Art. 20 AIA). That said, the provision obliging the provider to evaluate the continuous compliance of AI systems (Art. 61(2) AIA) appears counter-intuitive, since complex ML techniques make continuous evaluation an uphill task. Moreover, given the complexity of software systems, where anomalies are harder to detect than in hardware components, the short window of 15 days provided under Art. 62(1) AIA for reporting serious incidents and/or malfunctions to the market surveillance authorities of the Member States is unreasonable. Therefore, we suggest that the legislators extend this deadline.
Furthermore, we are of the opinion that the provision granting market surveillance authorities full access to training, validation, and testing datasets (Art. 64(1) AIA) should be omitted, as it is highly unworkable. The legislators need to acknowledge that some AI systems or legacy software may have been built using datasets that no longer exist, or may not use centralized datasets at all. For example, Federated Learning (FL) is a technique used by AI developers to train ML models without centralized data collection. FL leaves the data where it is, distributed across numerous devices and servers on the edge. This means that it may not always be possible for these AI systems to demonstrate compliance with the dataset requirements under Art. 10 AIA, generate centralized logs as outlined in Art. 12 AIA, or provide direct access to datasets per Art. 64 AIA.
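To illustrate why a market surveillance authority could not simply be handed "the dataset" in an FL setting, the following minimal sketch shows federated averaging (FedAvg), the canonical FL aggregation scheme: each client trains locally on data that never leaves it, and the server only combines the resulting model parameters. The linear model, client setup, and all names are illustrative assumptions for this sketch, not a description of any particular deployment.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (linear regression via gradient
    descent); the raw data (X, y) never leaves the client device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server-side FedAvg step: aggregate only the clients' model
    parameters, weighted by local sample counts; no raw data is
    collected centrally, so no central dataset ever exists."""
    total = sum(len(y) for _, y in clients)
    new_w = np.zeros_like(global_w)
    for X, y in clients:
        new_w += (len(y) / total) * local_update(global_w, X, y)
    return new_w

# Hypothetical example: two clients hold disjoint shards of data
# generated by the same underlying relationship y = 2x.
rng = np.random.default_rng(0)
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 1))
    y = 2 * X[:, 0]
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):  # communication rounds
    w = federated_average(w, clients)
# w now approximates the true coefficient 2.0, learned without any
# centralized dataset that an authority could be granted access to.
```

The point of the sketch is structural: only `w` (the model) and per-round parameter updates exist at the server, so an obligation to produce the underlying training data centrally has no object to attach to.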
Furthermore, the requirement to grant access should take into account relevant EU legislation such as the GDPR, which imposes data retention obligations following the data minimization principle, strict data transfer mechanisms, and safeguards for the security of personal data. The obligation to share the source code of high-risk AI systems upon a reasoned request of the market surveillance authorities (Art. 64(2) AIA) is further problematic, as source code may be protected by the EU Trade Secrets Directive [44] and the Computer Programs Directive [45]. The confidentiality provision of Art. 70 AIA seems insufficient to protect providers' intellectual property rights. Instead of stipulating full access to and disclosure of data and documents, the AIA should be modified to enable the market surveillance authorities to carry out robust testing.

Requirements for AI Systems with Limited Risks (Art. 52 AIA)
Art. 52 AIA refers to transparency in the context of human awareness of communication with AI systems. In doing so, however, it introduces another category of subjects involved in an AI life cycle, namely "natural persons" who interact with AI systems. This causes confusion with other terms used in the AIA, such as "user". It would therefore be appropriate for the AIA to clarify whether these categories of subjects differ from each other.

Lack of Effective Enforcement Structures
As mentioned under Sections 5.3 and 5.4, the AIA fails to establish effective enforcement structures, because it relies primarily on an internal self-assessment by AI providers for high-risk AI systems, combined with harmonized standards to be developed by private ESOs. Although Member States are empowered to implement rules on sanctions, and market surveillance authorities can require that a non-compliant AI system be withdrawn or recalled from the market, this may not be sufficient to ensure effective enforcement. The experience with the GDPR shows that overreliance on enforcement by national authorities leads to very different levels of protection across the EU due to different resources of authorities, but also due to different views as to when and how (often) to take actions [46]. Moreover, Member State authorities lack the substantive standards to assess whether providers of high-risk AI systems are in breach of the AIA. Since the proposal does not set any precise requirements for high-risk AI systems, but leaves this to the self-assessment of the providers (as well as the standardization organizations), there are no (uniform) standards according to which the AIA could be enforced.
Moreover, individuals affected by AI systems, civil rights organizations, or other interested parties have no right to complain to market surveillance authorities or to sue a provider or user for failures under the AIA. Hence, the proposal also lacks effective means of private enforcement to compensate for the weak public enforcement.

Competences of the European AI Board (EAIB)
The enforcement deficit described above would also not be overcome by the newly created "European Artificial Intelligence Board" (EAIB). Although the EAIB is meant to facilitate the harmonized implementation of the AIA, the proposal does not confer any enforcement powers on it. Instead, the EAIB's main purpose would be to issue opinions and recommendations on the implementation of the AIA, especially on standards and common specifications (Art. 58 AIA).
Further, the European Commission's ambition to facilitate a consistent and harmonized application of the AIA seems doubtful for several reasons. First, the functioning of the EAIB does not seem to be entirely independent from political influence, insofar as the AIA allots a predominant role to the European Commission, which not only chairs the EAIB but also has a right of veto over the adoption of the EAIB's rules of procedure [31] (p. 15). Second, the mechanisms for cooperation between the national supervisory authorities and the individuals affected by the AIA, as well as among the different national supervisory authorities, remain unspecified. Therefore, there is an apparent need to revise the provisions of the AIA in order to ensure greater autonomy for the EAIB.

New Legislative Framework
The European Commission intends to incorporate the AIA into the NLF of product safety requirements as a horizontal standard. The AIA is to contain specific requirements for AI systems, while the end product may well be subject to further requirements under the NLF. In the proposal, this only follows from an argumentum a contrario to Art. 2(2) AIA, which excludes AI systems that fall into the scope of the old legislative framework as safety components of products or systems, or as products or systems. Against this backdrop, we welcome that the Commission intends to adopt the AIA together with the proposed Machinery Products Regulation [47].

Civil Liability
As is common in product safety law, the AIA does not contain any provisions on civil liability for damages caused by AI systems. The proposal also lacks integration into European product liability law, currently regulated by the Product Liability Directive 1985/374 [48]. This approach is comprehensible, since the EU plans to address liability issues related to new technologies, including AI systems, in Q4 2021-Q1 2022 [49]. However, we would like to point out that the AIA should at least contain a clarification that it does not affect claims for damages by persons who have been harmed by a breach of the AIA.
Otherwise, it would remain unclear whether affected persons have a claim for damages under national law [50].

Data Protection Law
EU data protection law is, in principle, not affected by the AIA. However, an explicit statement to that effect is missing in the draft; only occasional references are made to the applicability of certain relevant provisions of the GDPR or the LED. Other provisions contain specific exceptions to the provisions of the GDPR, such as Art. 10(5) or Art. 54 AIA. The provisions on data governance apply alongside the GDPR and regardless of whether the data is personal or not. From our point of view, and contrary to the EDPB-EDPS Joint Opinion 5/2021, Art. 10(5) AIA seems clear enough to create a legal basis for the processing of special categories of data.
Consequently, the GDPR and the AIA follow different approaches. While the GDPR addresses primarily the protection of individual rights, embedding the values of European fundamental rights; the AIA targets product control and systemic risks without directly establishing any individual rights.
Nevertheless, compliance with data protection law should be required by an operative clause, not merely clarified in recitals, and such a requirement should be incorporated in Chapter 2 of Title III of the AIA. It should also entail provisions for taking into account, within Chapter 2 of Title III of the AIA, the codes of conduct and certification mechanisms provided under Chapter 4 of the GDPR, since, in its current form, the CE marking issued by notified bodies under the AIA simply disregards the GDPR.
Furthermore, data protection authorities (DPAs) already enforce the GDPR, the EUDPR, and the LED with respect to AI systems involving the use of personal data in the Member States. It would therefore be appropriate to also designate the DPAs as the national supervisory authorities pursuant to Art. 59 AIA, owing to the major overlap between their work under the AIA and under EU data protection law [31] (p. 14).

Proposal for a Digital Services Act (DSA)
The AIA also does not clarify how it relates to the proposed Digital Services Act (DSA) [51], which is intended to update the EU's horizontal rules on online providers while amending the E-Commerce Directive 2000/31. The explanatory memorandum of the AIA [14] only briefly states that the two legal acts are consistent with each other. Since online platforms often use AI systems to optimize their services [52] (paras. 16, 17), it should be made clear whether the obligations for high-risk systems governed by the AIA apply to platforms in addition to those arising under the DSA. There are strong arguments that the obligations of both EU regulations must be observed cumulatively, but in case of doubt, the liability privilege for platforms will prevail. Nevertheless, a statement from the EU legislator would be important here for reasons of legal certainty.

Regulatory Gaps of the AIA
Despite the innovative risk-based structure, the AIA appears to have some remarkable gaps.

Lack of Individual Rights
The AIA, and this is arguably one of its most crucial shortcomings, does not provide for any individual rights. Although the regulation is intended to protect fundamental rights, it lacks remedies by which individuals can seek redress for a breach of the regulation. In particular, the draft does not foresee any mechanism for individuals' recourse against AI-driven decision-making. This may be in line with the product safety regulatory approach, but it also calls that very approach into question. In any case, individual rights should be well coordinated with existing entitlements.

Other Gaps
The AIA aims to implement legal policy goals in consonance with European values, for example through the prohibition of social scoring in Art. 5(2) AIA. However, a European approach toward AI should respect not only human rights and ethical values but also other goals such as climate protection and sustainability. In this regard, the AIA does not directly mention "Green AI" or "Sustainable AI" as concrete goals of a European understanding of future-proof and sustainable AI development, although these are also important topics of the European Green Deal [53].
The proposal acknowledges that AI can support socially and environmentally beneficial outcomes and that such action is needed in the high-impact sector of climate change. Recital (28) AIA mentions that the fundamental right to a high level of environmental protection should be considered when assessing the potential harm of AI systems. Nevertheless, there are only scattered provisions addressing environmental protection, e.g., Art. 62 AIA, which obliges providers of high-risk AI systems to report any serious incident, a term that includes serious damage to the environment (Art. 3(44)(a) AIA).

Conclusions
All in all, despite some flaws and the need for clarification, the European Commission's Proposal for an Artificial Intelligence Act has many innovative elements and is generally well designed. The risk-based regulatory approach is not only appropriate but also the most practical. Nevertheless, one main aspect in need of improvement is the definition of the term "AI". The AIA employs a rather broad definition and thus creates the risk of overregulating systems that typically do not pose any danger resulting from ML and other AI techniques. The second major critique concerns the proposed (self-)enforcement structure, which mainly relies on internal controls conducted by the AI provider. In most cases, the AIA requires no external oversight. Additionally, the provisions regarding conformity standards and criteria often appear fragmentary and imprecise. In contrast to the imminent overregulation attributable to the broad AI definition, the self-enforcement approach raises concerns of underregulation.
Regardless, we strongly believe that the AIA is the appropriate measure to create a safe and reliable regulatory environment for AI in Europe and, thus, is a step in the right direction.