
16 February 2024

Resh(AI)ping Good Administration: Addressing the Mass Effects of Public Sector Digitalisation

Albert Sanchez-Graells
Law School, Faculty of Arts, Law and Social Sciences, University of Bristol, Clifton Campus, Bristol BS8 1RJ, UK
This article belongs to the Special Issue Law and Emerging Technologies

Abstract

Public sector digitalisation is transforming public governance at an accelerating rate. Digitalisation is outpacing the evolution of the legal framework. Despite several strands of international efforts to adjust good administration guarantees to new modes of digital public governance, progress has so far been slow and tepid. The increasing automation of decision-making processes puts significant pressure on traditional good administration guarantees, jeopardises individual due process rights, and risks eroding public trust. Automated decision-making has attracted the bulk of scholarly attention, especially in the European context. However, most analyses seek to reconcile existing duties towards individuals under the right to good administration with the challenges arising from digitalisation. Taking a critical and technology-centred doctrinal approach to developments under the law of the European Union and the Council of Europe, this paper goes beyond current debates to challenge the sufficiency of existing good administration duties. By stressing the mass effects that can derive from automated decision-making by the public sector, the paper argues for the need to adapt good administration guarantees to a collective dimension through an extension and a broadening of the public sector’s good administration duties: that is, through an extended ex ante control of organisational risk-taking, and a broader ex post duty of automated redress. These legal modifications should be urgently implemented.

1. Introduction

Much like in every other area of socio-economic activity, the “COVID-19 digital shift” and the mainstreaming of advances in artificial intelligence (AI) have prompted discussion of how the public sector could harness the advantages of digital technologies and data-driven insights. AI brings the abstract promise of a more efficient, adaptable, personalisable, and fairer public administration (Esko and Koulu 2023; Coglianese and Lai 2022; Sunstein 2022). Around the world, States are thus experimenting with AI technology, seeking more streamlined and efficient digital government and public services (OECD.AI 2023; Joint Research Centre AI Watch 2022)—in no small part as a driver for rationalisation or savings-generation in the organisation of their public administrations. The adoption of data-driven approaches, digital technologies, and AI to support or automate decision-making in the public sector is quickly transforming public governance (Yeung 2022; Dunleavy and Margetts 2023).
Such “digital transformation” poses significant risks that require new regulatory approaches (Kaminski 2023). Generative AI, for example, has been shown to pose risks of unreliability, misuse, and systemic harm (Maham and Küspert 2023)—risks that are particularly acute in public sector automated decision-making (ADM) supported by AI (Finck 2020; Kuziemski and Misuraca 2020). The accelerating shift towards new modes of digital public governance therefore requires an adaptation of the legal framework—a view that enjoys broad support (see, e.g., Curtis et al. 2023). There are signs of a growing (soft) international consensus on the need to regulate public sector AI adoption as part of broader rules on AI use (AI Safety Summit 2023; Ministry of Foreign Affairs of Japan 2023), with the United States of America recently taking perhaps the most decisive approach to date.1 However, progress has generally been slow and tepid so far, particularly in the context of the European Union (EU) and the Council of Europe (CoE), on which this paper will focus.
Pushing for such legal adaptations, much academic work has recently emerged on the need to adjust the regulation of ADM to protect the individual rights of those at the receiving end of these new modes of delivery of administrative (in)justice (Demková et al. 2023). In the European context, the emerging consensus is that the current legal framework is ineffective in tackling some (or most) of these risks, where, e.g., the technology pushes the limits of the General Data Protection Regulation (GDPR)2 or even those of new instruments of EU digital law, including the (at the time of writing, on 19 December 2023) yet-to-be-finalised EU AI Act3—on which there is a burgeoning literature (see, e.g., Demková 2023a, 2023b; Fink and Finck 2022; Gentile 2023; Cutts 2023). The legal framework is also seen as ineffective in preventing discrimination on grounds not (directly) linked to currently protected characteristics (Wachter 2022), thus leaving a gap in relation to new forms of algorithmic discrimination. The continued preservation of individual rights will thus require adjustments to the current legal framework (Laukyte 2022; Chevalier and Menéndez Sebastián 2022) and will eventually reshape individual rights under current approaches to good administration (Zerilli 2023). This will be part of the “digital transformation” of administrative law, but developments in individual rights cannot provide a full picture. Many other “traditional” administrative law doctrines will require careful reconsideration, and new doctrines and rules may be needed (Bello y Villarino 2023). The effectiveness of all changes and adaptations will, of course, hinge on their understanding and interpretation by practitioners (Røhl 2022).
Crucially, new modes of digital public governance not only jeopardise individual rights but also threaten collective rights and interests in the proper functioning of the public sector as a crucial driver of the legitimacy of administrative action (Smuha 2021; Ranchordás 2022; Coglianese 2023; Kouroutakis 2023; Carney 2023). It has been stressed that there is a need to rethink administrative procedural fairness and to move beyond current individualistic approaches (Meers et al. 2023; Tomlinson et al. 2023). In a similar attempt to go beyond the “individual unit” in reshaping good administration guarantees, and taking a critical and technology-centred doctrinal approach to developments under EU and CoE law (Section 2), this paper goes beyond current debates on the regulation of ADM to challenge the sufficiency of existing good administration duties. By stressing the mass effects that can derive from ADM in the public sector, whether as a result of AI adoption or of the use of less sophisticated algorithms and forms of automation (Section 3), the paper advocates for the expansion of good administration guarantees to a collective dimension through an extension and a broadening of the public sector’s good administration duties: an extended ex ante control of organisational risk-taking (Section 4), and a broader ex post duty of automated redress (Section 5). The paper concludes with a reflection on the urgency of implementing the proposed legal reforms (Section 6). Although the paper focuses on the European context, it is of relevance beyond Europe, given that some of the main issues identified in the analysis arise from the individualistic logic followed in the design of good administration guarantees common to OECD jurisdictions.4

3. Mass Effects of the Digitalisation of Public Sector Decision-Making as the Crucial Challenge

It is increasingly accepted that regulating AI use by the public sector, and AI use more generally, requires a precautionary or anticipatory approach (Kaminski 2023). At least in part, this stems (or should stem) from the realisation that AI deployment can generate mass effects that are very difficult or simply impossible to correct after the fact. Experience has already shown that the implementation of defective or discriminatory algorithms by the public sector can generate massive harm, thwarting the lives and opportunities of very many citizens—and oftentimes the most vulnerable and marginalised (Sinclair 2023). This has become painfully obvious in the light of scandals such as the Robodebt scheme implemented in Australia (Royal Commission into the Robodebt Scheme 2023), the UK’s Post Office scandal involving the Horizon software (Marshall 2022), or the equally scandalous deployment of the digital welfare fraud detection system (System Risk Indication, SyRI) in the Netherlands (Fenger and Simonse 2024). These cases show how highly automated and data-driven screening mechanisms deployed at the population level can generate extremely harmful mass effects, and how difficult it is for individuals to obtain adequate redress and compensation. The standard approach to the enforcement of human and fundamental rights, including the right to good administration, through ex post individual claims is bound to fail in the digital context. Furthermore, there are additional ways in which AI can erode the individualisation of decision-making. AI systems might handle cases in batches rather than giving them individual consideration, or self-learning processes might mean that future decisions are influenced by past ones (Binns 2021). This lack of individual consideration can be problematic even if the massified outcomes are not systematically harmful, and there may be further (unobservable) breaches of existing guarantees under good administration duties.
The current (analog) regime developed in a context of “human-exclusive” decision-making that, by definition, is (severely) constrained by limitations in the amount of information that can be processed and in the speed with which decisions can be made, communicated, and executed. This context has provided the implicit paradigm for the conceptualisation and implementation of the right to good administration. In that regard, the unavoidable (slow) pace of administrative decision-making within that paradigm worked to foster good administration, in that the necessary delay between the start of an administrative procedure and the adoption of the relevant (human-exclusive) decision created space (and time) for the exercise of individual rights. Within that paradigm, even faulty approaches to decision-making (e.g., by an official or a branch of the public administration) would have limited effects, given the constraints on the volume of decisions that could be adopted before a (successful) challenge forced a change of approach. The specific procedural rights (to access the file, to be heard, to obtain reasons for decisions, to challenge them) underpinning the current incarnation of the right to good administration are premised on such a paradigm of individualised decision-making (see above Section 2.1).
To be sure, the standardisation of administrative processes and the increased processing capabilities of information and communication technologies (ICT) already exerted pressure on this paradigm, as the cost of and delay in processing information fell and, as a direct consequence, the volume of (individual) decisions that could be created by a single (still) human decision-maker increased (Dunleavy et al. 2006). However, the threshold for “mass effects” was arguably not crossed until data-driven approaches and the adoption of algorithms, including AI, to support or automate decision-making became commonplace. This has suddenly changed the relevant paradigm. In the new paradigm, there is little constraint on the volume of (individual) decisions that can be made through human–machine collaboration or through complete automation. The challenge here is not (primarily) how to adapt the existing procedural rights, because they make little (pragmatic) sense in the context of instant (automated) decision-making. Once the data has been chosen, collected, and structured, and once the algorithm has been chosen (trained and tested), there is barely any delay between the start of a digitalised decision-making process and the generation of the relevant algorithmic output (decision). This renders most specific rights either very difficult to implement or largely irrelevant, as decision-making largely becomes a fait accompli: specific decisions in relation to the data and the algorithm pre-determine the relevant decisions. Crucially, given the level of centralisation in decision-making and the negligible marginal cost of each additional decision, techno-organisational decisions preceding the adoption of the (individual) decisions by the relevant supported or automated process can irretrievably translate into breaches of the right to good administration (as well as of other fundamental rights) of many citizens at once, all “with a simple click of the mouse”, so to speak.
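Although the argument here is doctrinal, the scale asymmetry at its core can be made concrete with a minimal sketch. The following Python snippet is purely illustrative (the population size, scores, and thresholds are all invented for the example); it shows how a single techno-organisational parameter choice instantly re-determines thousands of individual outcomes at negligible marginal cost:

```python
import random

random.seed(1)

# Hypothetical population of 100,000 claims, each already reduced to a single
# risk score by upstream data and modelling choices.
claims = [{"id": i, "risk_score": random.random()} for i in range(100_000)]

def decide_all(claims, risk_threshold):
    """One techno-organisational choice (the threshold) pre-determines every
    individual outcome in a single batch run, with no per-decision delay."""
    return {c["id"]: ("reject" if c["risk_score"] > risk_threshold else "grant")
            for c in claims}

# A configuration value is edited "with a simple click of the mouse"...
before = decide_all(claims, risk_threshold=0.9)
after = decide_all(claims, risk_threshold=0.7)

# ...and roughly 20,000 outcomes flip at once, with no procedural step between
# the configuration change and the mass of individual decisions.
flipped = sum(1 for i in before if before[i] != after[i])
print(f"Decisions changed by a single parameter edit: {flipped}")
```

No procedural guarantee attaches to the threshold edit itself; this is precisely the gap addressed in the remainder of the paper.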
In my view, the mass effects generated by decision-making supported or automated through digital technologies constitute the most distinctive feature of, and the most crucial challenge for, the adaptation of good administration duties in the new paradigm of digital public governance. However, the challenges in ensuring individual guarantees derived from the right to good administration in a mass decision-making setting are rarely acknowledged, although there are some exceptions, for example, in relation to the right to access the file (EU Agency for Fundamental Rights 2020). I argue that the focus should be on tackling the issue of mass effects. This requires a dual approach: first, minimising the risk of such (negative) mass effects materialising, through intense scrutiny and testing of the relevant technical solutions pre-deployment (Bello y Villarino 2023); and second, creating proactive duties incumbent upon the public administration to undo such (negative) mass effects, so that reversing or compensating for the effects of the supported or automated decision-making does not depend on the ability of the affected citizens to identify and challenge this situation.
Conceptually, this would require both an extension of the right to good administration to phases of decision-making that are not yet directly relevant to the individual (Section 4), and a broadening of good administration guarantees to a collective dimension, to account for the new risks arising in the AI-driven administrative context and to avoid those risks being internalised by those at the receiving end of the decision-making (Section 5). Whether it would be possible to implement these adaptations on the basis of Articles 41 and 47 CFR, as they stand, could be a relevant consideration for the implementation of this proposal. However, in my view, the individualistic logic of the system (above Section 2.1) makes that nigh on impossible, and an explicit reform of the CFR would be preferable. In any case, the remainder of the paper is not concerned with considerations at this level of technical detail.

4. Ex Ante Control of Organisational Risk-Taking

At its core, the adoption of digital technologies to support or automate public sector decision-making implies organisational risk-taking and, as things stand, this decision can be made without the public sector having to consider (or internalise) the significant externalities that it can impose on those at the receiving end of the decision-making process. Given the potential mass effects of discrete techno-organisational decisions discussed above (Section 3), it is not acceptable, or commensurate with the levels of protection desirable in systems of human and fundamental rights, to expect large numbers of citizens—or specific minorities or groups disproportionately impacted by (biased) decision-making—to have to rely on individualised ex post challenges to the implementation of those techno-organisational decisions. The right to good administration—or the mirroring duty of good administration incumbent on the public sector—must encompass a proactive and thorough ex ante assessment of the likely impact of techno-organisational decisions on the ability of the public sector user to respect individual rights when deploying AI. Such assessment needs to take place at the point of organisational risk-taking: that is, ahead of, and in anticipation of, the technological deployment.
In my view, such an assessment of the likely (in)compatibility of a planned technological deployment with individual rights needs to be undertaken by an institution with sufficient independence and domain expertise, which rules out a self-assessment by the public sector user and/or its technology providers. Even if the relevant fundamental rights impact assessment were published in full and subjected to public consultation or contestation, there is no guarantee that the process would result in a sufficiently robust control of the planned technological deployment. Both the public sector user and the technology provider would have incentives to gloss over important fundamental rights issues, or could behave strategically in terms of information disclosure or the interpretation of the impact assessment at the later stage of technological deployment. For it to be effective, an ex ante control of the likely impact of a technological deployment on fundamental rights, and of its broader alignment with the relevant goals of digital regulation, thus needs to be implemented through a system of licensing or permissioning of public sector AI use. In relation to the public procurement procedures that will usually operate as the conduit for the acquisition of such digital technologies (unless they are developed in-house), I have developed elsewhere a proposal for the system to be managed by an “AI in the Public Sector Authority” (AIPSA) (Sanchez-Graells 2024a) (along similar lines, see Martín Delgado 2022; Gavaghan et al. 2019).
To foster the effectiveness of such a system of ex ante control and permissioning of the adoption of digital technologies to support or automate public sector decision-making, the right to good administration would need to be broadened so that it encompasses a right to enforce the licensing mechanism against any planned or implemented AI deployment by the public sector—an alternative, but complementary, approach to disclosure-based proposals (see, e.g., Smuha 2021; Laux et al. 2023). The right could be framed in negative terms, such as an individual right not to be affected by administrative decisions resulting from the use of unlicensed systems or of systems violating the terms of the relevant licence. This would be a variation of the right not to be subjected to automated decision-making, as it would not challenge what is done, but how AI is deployed by the public sector. The risk of nullity of all relevant administrative decisions, coupled with the obligation to proactively compensate for their negative effects, would work as an effective deterrent against the unlicensed adoption of digital technologies by public sector users.
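To fix ideas, a schematic sketch of how such a licensing gate could sit within a decision pipeline follows. It does not describe any existing system: the registry, the licence terms, and all names are hypothetical, and a real implementation would turn on the institutional design of the AIPSA-style authority discussed above:

```python
from dataclasses import dataclass

@dataclass
class Licence:
    system_id: str
    permitted_purpose: str
    active: bool

# Hypothetical registry maintained by the licensing/permissioning authority.
LICENCE_REGISTRY = {
    "fraud-screen-v2": Licence("fraud-screen-v2", "benefit fraud screening", True),
}

class UnlicensedSystemError(Exception):
    """Decisions issued in breach of the gate would be null and void and would
    trigger the proactive compensation duty discussed in the text."""

def issue_decision(system_id: str, purpose: str, decision: str) -> str:
    licence = LICENCE_REGISTRY.get(system_id)
    # The gate polices *how* AI is deployed, not *what* is decided: a missing
    # licence, or use outside the licensed purpose, blocks the output entirely.
    if licence is None or not licence.active:
        raise UnlicensedSystemError(f"{system_id} has no active licence")
    if licence.permitted_purpose != purpose:
        raise UnlicensedSystemError(f"{system_id} is not licensed for {purpose!r}")
    return decision

print(issue_decision("fraud-screen-v2", "benefit fraud screening", "no further action"))
try:
    issue_decision("fraud-screen-v2", "tax risk scoring", "audit")  # use beyond licence
except UnlicensedSystemError as err:
    print(f"decision void: {err}")
```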
The need to facilitate oversight before mass effects are created has already been stressed in the existing literature, e.g., arguing for the need to facilitate the early review of decisions to adopt AI by setting aside considerations of the “non-regulatory nature of the administrative process in civil law systems, or the ripeness doctrine in common law systems” (Kouroutakis 2023, p. 12). Along similar lines, the need to consider whether express legislative authorisation for the use of ADM technologies may be necessary or desirable has also been stressed (Miller 2023), as an opportunity for a broader assessment of the planned AI deployment and its socio-technical characteristics. The proposal here charts an intermediate path that is complementary to both approaches, inasmuch as it seeks to establish a system of ex ante control and permissioning, but not at the legislative level, and with the primary goal of ensuring that oversight does not depend on the viability of (or the initiative for) a (judicial) challenge. The crucial aspect of the proposal, from the good administration perspective, is that the mechanisms for the public enforcement of the permissioning system would be strengthened by a parallel mechanism based on individual rights under a revised Article 41 CFR. To further facilitate the enforcement of such individual rights, it would be advisable to consider the possibility of their exercise in a collective manner, e.g., through representative institutions. A detailed assessment of those possibilities exceeds the scope of this paper.

5. Ex Post Automated Redress Duty

It is also increasingly accepted that the automation of decision-making, and the mass effects that can result from a single techno-organisational decision, pose significant challenges to existing remedy systems (e.g., Jan 2023b). The importance of remedies in ensuring the proper use of AI systems has been receiving increasing attention, including the role of redress as a mechanism to enhance human agency in AI-dominated decision-making environments (Fanni et al. 2023). This has led to proposals, for example, to decouple AI adoption from its mass effects by requiring human involvement in the review of challenges against the initial (automated) decision. However, in my view, assessing redress in the context of mass administrative decision-making requires a slightly different (albeit complementary) approach.
In a context of mass decision-making, it is easy to see how tribunals and courts could quickly become overwhelmed and ineffective if they had to deal with thousands or even hundreds of thousands of claims arising from a single techno-organisational decision (e.g., the implementation of a faulty algorithm in any core digital government service to do with taxation or social security). Given the growing interconnectedness of administrative procedures through the multiple uses of data points and the increasing data interconnections or feedforward processes in public sector “data lakes”, it is also increasingly clear that the outputs of a techno-organisational solution (e.g., a flag of potential social benefit fraud) can “snowball” through an increasingly interconnected and data-driven public administration (e.g., by triggering further flags in relation to other administrative procedures), thus further increasing the volume and variety of harms, damages, and complaints that can arise from a single AI deployment (see, e.g., Widlak et al. 2021). This further compounds the mass effects of supported or automated decision-making processes, as the increase in the scale of the potential negative impacts concerns not only a plurality of citizens, but also a plurality of interests of any single citizen. The more centralised or interconnected the public sector, the higher the risk of disproportionate effects arising from faulty supported or automated decision-making processes.
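The snowballing described here can be pictured as flag propagation over a graph of interconnected administrative procedures. The sketch below is purely illustrative (the procedures and their links are invented), but it shows why a single faulty output multiplies the interests of each citizen that are put at risk:

```python
from collections import deque

# Hypothetical feedforward links: a flag raised in one procedure feeds into
# every procedure that consumes its outputs downstream.
FEEDS_INTO = {
    "benefit_fraud_flag": ["housing_allowance_review", "tax_credit_review"],
    "housing_allowance_review": ["debt_recovery"],
    "tax_credit_review": ["debt_recovery", "childcare_subsidy_review"],
    "debt_recovery": [],
    "childcare_subsidy_review": [],
}

def snowball(initial_flag):
    """Breadth-first propagation of one faulty output through the graph."""
    affected, queue = {initial_flag}, deque([initial_flag])
    while queue:
        for downstream in FEEDS_INTO[queue.popleft()]:
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# One faulty flag touches five procedures for a single citizen; multiplied by
# the number of citizens screened, the potential volume of claims is clear.
print(snowball("benefit_fraud_flag"))
```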
Equally, it is increasingly accepted that there are social interests (e.g., in the proper functioning of the public administration as a crucial element in citizens’ assessments of the functioning of the State and the underlying constitutional settlement) that are not amenable to the current system of individual redress (Smuha 2021), either because the related incentives do not operate in favour of enforcing any existing checks and balances (e.g., where the individual interest is relatively small and would thus not “activate” individual claims), or because the erosion of social interests results from compounded techno-organisational processes with interactive effects in the long run that cannot be separately challenged effectively (Yeung 2019, pp. 42, 75). This poses a major difficulty and risks undermining confidence in the administrative justice system more generally (along the same lines Meers et al. 2023; Tomlinson et al. 2023).
While ex ante controls on the adoption of AI by the public sector (as above, Section 4) should reduce the likelihood or frequency of such mass and/or collective and social harms, such harms would not be excluded altogether. It is thus necessary to think about ways to tackle the issue. In my view, a broadening of the right to good administration to encompass a proactive duty on the public administration using an AI deployment to undo the harms arising from techno-organisational decisions would go some way in that regard (similarly, Widlak et al. 2021). A public administration put on notice of a (potential) harm arising from an AI deployment would immediately become duty-bound to (a) suspend or discontinue the use of the AI, and (b) proactively redress the situation for everyone affected, without the need for any individual claims. To facilitate this, the existence of a mechanism to discontinue the technical deployment, and of adequate records of the effects and outputs it has generated, would need to be established as conditions for the permission to use the relevant technology (above Section 4). The user public administration put on notice would also be (c) under a duty to report to the licensing or permissioning authority (AIPSA), so that relevant duties to revisit the assessment of equivalent or compounded AI deployments potentially affected by the same problem are triggered. All public authorities using such AI deployments would be under (d) a duty to collaborate in the efforts to proactively undo the damage and to “fix the system” going forward. For this to be implemented, there would need to be adequate records and inventories of the use of data and digital technology solutions across the public sector, which would require the creation of registries more comprehensive than, e.g., the narrow registers of high-risk AI use foreseen in the EU AI Act.
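The four duties could be chained into a single incident-response workflow, sketched below. The classes and names are hypothetical, and the sketch simply assumes the record-keeping and discontinuation hooks that the text proposes as licensing conditions:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    claimant: str
    outcome: str
    revoked: bool = False

@dataclass
class Deployment:
    system_id: str
    suspended: bool = False
    decisions: list = field(default_factory=list)  # audit record of all outputs

def handle_harm_notice(deployment, aipsa_inbox, collaborating_authorities):
    # (a) Suspend or discontinue the deployment on notice of (potential) harm.
    deployment.suspended = True
    # (b) Proactively redress everyone affected, relying on the audit records
    #     required as a licensing condition; no individual claim is needed.
    for decision in deployment.decisions:
        decision.revoked = True
        print(f"redress owed to {decision.claimant} for '{decision.outcome}'")
    # (c) Report to the licensing/permissioning authority (AIPSA) so that
    #     equivalent or compounded deployments are re-assessed.
    aipsa_inbox.append(("incident", deployment.system_id))
    # (d) Put all authorities using connected deployments under a duty to
    #     collaborate in undoing the damage and fixing the system.
    return [f"{authority}: join remediation of {deployment.system_id}"
            for authority in collaborating_authorities]

deployment = Deployment("fraud-screen-v2",
                        decisions=[Decision("claimant A", "benefit rejected"),
                                   Decision("claimant B", "benefit rejected")])
aipsa_inbox = []
print(handle_harm_notice(deployment, aipsa_inbox,
                         ["tax authority", "housing authority"]))
```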
The implementation of this proposal would require a modification of Articles 41 and 47 CFR, as well as, most likely, the creation of additional legislation at the domestic level. A detailed assessment of those implementation issues exceeds the scope of this paper.

6. Conclusions

In this paper, I have stressed the “individual rights logic” underpinning the promotion of good administration, which is a common basis of, e.g., OECD jurisdictions. By focusing on European legislation, I have shown how the limitations of the current incarnation of the right to good administration in the CFR will not be sufficiently addressed by the adoption of the EU AI Act or of the CoE AI Convention. The emerging “European approach” to reshaping good administration for digital public governance is significantly constrained by the threshold issues that result from placing the regulatory focus on high-risk AI uses, as well as by the absence of strictly enforceable individual rights. It will thus do little to address the broader gaps in the regulation of good administration.
Such gaps are particularly visible when the focus is put on the mass effects that the digitalisation of public sector decision-making is bound to generate. Such mass effects risk depriving good administration rights of any practical effect, where supported or automated decision-making systems present administrative decisions as a fait accompli and where the sheer number of potentially affected citizens and related (snowball) claims is bound to overwhelm existing mechanisms for the review or appeal of those decisions.
To try to address the challenge of mass effects, I have put forward a proposal to both extend and broaden good administration rights under the CFR. First, I have proposed an extension of the right to good administration to the control of techno-organisational decisions, that is, to (preparatory) phases of administrative decision-making that are not yet directly relevant to the individual. The new (extended) right would reinforce a mechanism of external oversight of public sector adoption of digital technologies through ex ante permissioning or licensing (Sanchez-Graells 2024a). It would consist of an individual right not to be affected by administrative decisions resulting from the use of unlicensed systems or of systems violating the terms of the relevant licence. Second, I have proposed a broadening of the existing right to obtain redress for the damages arising from defective decision-making, which would come to encompass a proactive duty on the public administration to undo the damage arising from supported or automated decision-making, as well as a duty of inter-administrative cooperation to mitigate or compensate for consequential damages (or risks) arising from the increased centralisation and interconnection of a data-driven public sector.
In my view, both interventions are closely interrelated and, if implemented properly, could significantly mitigate the risks inherent in the digitalisation of the public sector. A final reflection is that such interventions—or equivalent ones proposed by others—are urgent. While the legislative framework adapts at a slow pace, the public sector is quickly accumulating a stock of data and of digital technology solutions for supported or automated decision-making that will be very difficult to dismantle once embedded. Even where dismantling remains possible, it will be very costly. More importantly, the accelerating process of digital transformation is currently externalising significant risks onto citizens and, most likely, disproportionately onto the most vulnerable among them. This is in itself a threat to the existing obligations to protect and promote a broad array of human and fundamental rights. We should not need to wait for the next big scandal before taking decisive action. Much like the EU has been keen to be a trendsetter in the regulation of (some uses of) AI, it should also be willing to be a trendsetter in reshaping good administration for the new digital governance paradigm.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This paper builds on the preliminary thoughts presented at the symposium on “Safeguarding the Right to Good Administration in the Age of AI”, part of the III DigiCon Conference, held at the European University Institute on 19–20 October 2023. I am grateful to the symposium convenors Simona Demková, Melanie Fink, and Giulia Gentile, and to Filipe Brito Bastos, Marco Almada, and all other participants for very thought-provoking discussions. I am furthermore grateful to Marco Almada for additional comments on an early draft of this paper. I am also thankful to Colin Gavaghan for the invitation to contribute to this special issue and for a broad array of helpful conversations. Any remaining errors are solely my responsibility. Comments and feedback are welcome.

Conflicts of Interest

The author declares no conflicts of interest.

Notes

1. Executive Order 14110 of 30 October 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Available online: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (accessed on 18 December 2023).
2. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data [2016] OJ L119/1. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 18 December 2023).
3. Future of Life Institute. The EU Artificial Intelligence Act. Up-to-date developments and analyses of the EU AI Act. Available online: https://artificialintelligenceact.eu/ (accessed on 18 December 2023).
4. As evidenced, e.g., in the 2023 edition of the OECD SIGMA Principles of Public Administration. Available online: https://www.sigmaweb.org/publications/Principles-of-Public-Administration-2023.pdf (accessed on 19 December 2023).
5. Charter of Fundamental Rights of the European Union [2016] OJ C202/389. Available online: http://data.europa.eu/eli/treaty/char_2016/oj (accessed on 19 December 2023).
6. Council of Europe. 2018. The Administration and You. Principles of administrative law concerning relations between individuals and public authorities. Available online: https://rm.coe.int/eng-handbook-on-administration/1680a03ee2 (accessed on 19 December 2023). Although the CoE Principles are not underpinned by a specific right to good administration in the European Convention on Human Rights (ECHR), they have been developed in cases involving administrative decision-making affecting ECHR rights. The CoE Principles are thus a non-binding authoritative source for the interpretation of the right to good administration in Article 41 CFR. Additional guidance can be found in European Commission. 2017. Quality of Public Administration. A Toolbox for Practitioners. Available online: https://ec.europa.eu/social/main.jsp?catId=738&langId=en&pubId=8055&type=2&furtherPubs=no (accessed on 19 December 2023).
7. Council of Europe. 2023. Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (2nd reading, CAI(2023)28). Available online: https://www.coe.int/en/web/artificial-intelligence/cai (accessed on 19 December 2023).
8. E.g., Case C-219/20, Bezirkshauptmannschaft Hartberg-Fürstenfeld (Délai de prescription), ECLI:EU:C:2022:89, paragraph 37; and Joined Cases C-225/19 and C-226/19, Minister van Buitenlandse Zaken, ECLI:EU:C:2020:951, paragraph 34.
9. See note 3 above. For an analysis conducted while revising the final version of this text, see (Sanchez-Graells 2024b).
10. E.g., Case C-634/21, SCHUFA Holding (Scoring), ECLI:EU:C:2023:957.
11. See note 7 above.

References

  1. Abrusci, Elena, and Richard Mackenzie-Gray Scott. 2023. The questionable necessity of a new human right against being subject to automated decision-making. International Journal of Law and Information Technology 31: 114–43. [Google Scholar] [CrossRef]
  2. AI Safety Summit. 2023. The Bletchley Declaration by Countries Attending the AI Safety Summit. November 1–2. Available online: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 (accessed on 18 December 2023).
  3. Bello y Villarino, José-Miguel. 2023. A Tale of Two Automated States. Why a One-Size-Fits-All Approach to Administrative Law Reform to Accommodate AI Will Fail. In Money, Power, and AI. Automated Banks and Automated States. Edited by Zofia Bednarz and Monika Zalnieriute. Cambridge: Cambridge University Press, pp. 136–51. [Google Scholar]
  4. Bertuzzi, Luca. 2023. European Union Squares the Circle on the World’s First AI Rulebook. Available online: https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/ (accessed on 19 December 2023).
  5. Bertuzzi, Luca. 2024. Tug of War Continues on International AI Treaty as Text Gets Softened Further. Available online: https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/ (accessed on 2 February 2024).
  6. Binns, Reuben. 2021. Analogies and Disanalogies Between Machine-Driven and Human-Driven Legal Judgement. Computational and Text-Driven Law 1: 1–12. [Google Scholar]
  7. Carney, Terry. 2023. The Automated Welfare State. Challenges for Socioeconomic Rights of the Marginalised. In Money, Power, and AI. Automated Banks and Automated States. Edited by Zofia Bednarz and Monika Zalnieriute. Cambridge: Cambridge University Press, pp. 95–115. [Google Scholar]
  8. Chevalier, Emilie, and Eva Ma Menéndez Sebastián. 2022. Digitalisation and Good Administration Principles. European Review of Digital Administration & Law 3: 5–8. [Google Scholar]
  9. Coglianese, Cary. 2023. Law and Empathy in the Automated State. In Money, Power, and AI. Automated Banks and Automated States. Edited by Zofia Bednarz and Monika Zalnieriute. Cambridge: Cambridge University Press, pp. 173–88. [Google Scholar]
  10. Coglianese, Cary, and Alicia Lai. 2022. Algorithm vs. Algorithm. Duke Law Journal 71: 1281–340. [Google Scholar]
  11. Corder, Hugh. 2020. A Right to Administrative Justice ‘New’ or Just Repackaging the Old? In The Cambridge Handbook of New Human Rights. Edited by Andreas von Arnauld, Kerstin von der Decken and Mart Susi. Cambridge: Cambridge University Press, pp. 491–514. [Google Scholar]
  12. Council of the EU. 2023. Artificial Intelligence Act: Council and Parliament Strike a Deal on the First Rules for AI in the World. Available online: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/ (accessed on 19 December 2023).
  13. Craig, Paul. 2021. Article 41. The Right to Good Administration. In The EU Charter of Fundamental Rights. A Commentary. Edited by Steve Peers, Tamara Hervey, Jeff Kenner and Angela Ward. Oxford: Beck–Hart–Nomos, pp. 1125–52. [Google Scholar]
  14. Craig, Paul, Herwig C. H. Hofmann, Jens-Peter Schneider, and Jacques Ziller. 2015. ReNEUAL Model Rules on EU Administrative Procedure. Oxford: Oxford University Press. [Google Scholar]
  15. Curtis, Caitlin, Nicole Gillespie, and Steven Lockey. 2023. AI-deploying organizations are key to addressing ‘perfect storm’ of AI risks. AI and Ethics 3: 145–53. [Google Scholar] [CrossRef] [PubMed]
  16. Cutts, Tatiana. 2023. Supervising Automated Decisions. In Money, Power, and AI. Automated Banks and Automated States. Edited by Zofia Bednarz and Monika Zalnieriute. Cambridge: Cambridge University Press, pp. 205–20. [Google Scholar]
  17. Demetzou, Katerina, Sebastião Barros Vale, and Gabriela Zanfir-Fortuna. 2023. The thin red line: Refocusing data protection law on ADM, a global perspective with lessons from case-law. Computer Law & Security Review 49: 105806. [Google Scholar]
  18. Demková, Simona. 2023a. Automated Decision-Making and Effective Remedies. The New Dynamics in the Protection of EU Fundamental Rights in the Area of Freedom, Security and Justice. Cheltenham: Edward Elgar. [Google Scholar]
  19. Demková, Simona. 2023b. The EU’s Artificial Intelligence Laboratory and Fundamental Rights. In Redressing Fundamental Rights Violations by the EU: The Promise of the ‘Complete System of Remedies’. Edited by Melanie Fink. Cambridge: Cambridge University Press. Available online: https://ssrn.com/abstract=4566098 (accessed on 18 December 2023).
  20. Demková, Simona, and Herwig C. H. Hofmann. 2022. General principles of procedural justice. In Research Handbook on General Principles in EU Law. Edited by Katja S. Ziegler, Päivi J. Neuvonen and Violeta Moreno-Lax. Cheltenham: Edward Elgar, pp. 209–26. [Google Scholar]
  21. Demková, Simona, Melanie Fink, and Giulia Gentile. 2023. The Digital Future of European Public Administration: Introduction to the Symposium on Safeguarding the Right to Good Administration in the Age of AI. The Digital Constitutionalist. Available online: https://digi-con.org/the-digital-future-of-european-public-administration-introduction-to-the-symposium-on-safeguarding-the-right-to-good-administration-in-the-age-of-ai/ (accessed on 18 December 2023).
  22. Dunleavy, Patrick, and Helen Margetts. 2023. Data science, artificial intelligence and the third wave of digital era governance. Public Policy and Administration, ahead of print. [Google Scholar] [CrossRef]
  23. Dunleavy, Patrick, Helen Margetts, Simon Bastow, and Jane Tinkler. 2006. Digital Era Governance: IT Corporations, the State, and e-Government. Oxford: Oxford University Press. [Google Scholar]
  24. Esko, Terhi, and Riikka Koulu. 2023. Imaginaries of better administration: Renegotiating the relationship between citizens and digital public power. Big Data & Society 10: 1–14. [Google Scholar] [CrossRef]
  25. EU Agency for Fundamental Rights. 2020. Getting the Future Right. Artificial Intelligence and Fundamental Rights. Available online: https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights (accessed on 19 December 2023).
  26. EU Ombudsman. 2002. The European Code of Good Administrative Behaviour. Available online: https://www.ombudsman.europa.eu/sv/publication/en/3510 (accessed on 19 December 2023).
  27. European Parliament. 2023. Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI. Available online: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai (accessed on 19 December 2023).
  28. Fanni, Rosanna, Valerie Eveline Steinkogler, Giulia Zampedri, and Jo Pierson. 2023. Enhancing human agency through redress in Artificial Intelligence Systems. AI & Society 38: 537–47. [Google Scholar]
  29. Fenger, Menno, and Robin Simonse. 2024. The Implosion of the Dutch Surveillance Welfare State. Social Policy and Administration, ahead of print. [Google Scholar] [CrossRef]
  30. Finck, Michèle. 2020. Automated Decision-Making and Administrative Law. In The Oxford Handbook of Comparative Administrative Law. Edited by Peter Cane, Herwig C. H. Hofmann, Eric C. Ip and Peter L. Lindseth. Oxford: Oxford University Press, pp. 656–76. [Google Scholar]
  31. Fink, Melanie, and Michèle Finck. 2022. Reasoned A(I)dministration: Explanation requirements in EU law and the automation of public administration. European Law Review 47: 376–92. [Google Scholar]
  32. Gavaghan, Colin, Alistair Knott, James Maclaurin, John Zerilli, and Joy Liddicoat. 2019. Government Use of Artificial Intelligence in New Zealand. Available online: https://ourarchive.otago.ac.nz/bitstream/handle/10523/9372/NZLF%20report.pdf (accessed on 19 December 2023).
  33. Gentile, Giulia. 2023. Between Online and Offline Due Process: The Digital Services Act. In New Directions in Digitalisation: Perspectives from EU Competition Law and the Charter of Fundamental Rights. Edited by Annegret Engel and Xavier Groussot. Heidelberg: Springer. Available online: https://ssrn.com/abstract=4550655 (accessed on 18 December 2023).
  34. Hofmann, Herwig C. H., and Bucura C. Mihaescu. 2013. The Relation between the Charter’s Fundamental Rights and the Unwritten General Principles of EU Law: Good Administration as the Test Case. European Constitutional Law Review 9: 73–101. [Google Scholar] [CrossRef]
  35. Jan, Benjamin. 2023a. Can the Duty of Care Be Complied With in the Algorithmic State? The Digital Constitutionalist. Available online: https://digi-con.org/can-the-duty-of-care-be-complied-with-in-the-algorithmic-state/ (accessed on 19 December 2023).
  36. Jan, Benjamin. 2023b. Safeguarding the Right to an Effective Remedy in Algorithmic Multi-Governance Systems: An Inquiry in Artificial Intelligence-Powered Informational Cooperation in the EU Administrative Space. Review of European Administrative Law 16: 9–36. [Google Scholar]
  37. Joint Research Centre AI Watch. 2022. European Landscape on the Use of Artificial Intelligence by the Public Sector. Available online: https://ai-watch.ec.europa.eu/publications/ai-watch-european-landscape-use-artificial-intelligence-public-sector_en (accessed on 18 December 2023).
  38. Kaminski, Margot E. 2023. Regulating the risks of AI. Boston University Law Review 103: 1347–411. [Google Scholar] [CrossRef]
  39. Kouroutakis, Antonios E. 2023. Public Data, AI Applications and the Transformation of the State: Contemporary Challenges to Democracy. Available online: https://ssrn.com/abstract=4569189 (accessed on 18 December 2023).
  40. Kuziemski, Maciej, and Gianluca Misuraca. 2020. AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy 44: 101976. [Google Scholar] [CrossRef]
  41. Laukyte, Migle. 2022. Averting enfeeblement and fostering empowerment: Algorithmic rights and the right to good administration. Computer Law & Security Review 46: 105718. [Google Scholar]
  42. Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2023. Three Pathways for Standardisation and Ethical Disclosure by Default under the European Union Artificial Intelligence Act. Available online: https://ssrn.com/abstract=4365079 (accessed on 19 December 2023).
  43. Lock, Tobias. 2019. Article 41 CFR Right to good administration. In The EU Treaties and the Charter of Fundamental Rights: A Commentary. Edited by Manuel Kellerbauer, Marcus Klamert and Jonathan Tomkin. Oxford: Oxford University Press, pp. 2204–7. [Google Scholar]
  44. Maham, Pegah, and Sabrina Küspert. 2023. Governing General Purpose AI. A Comprehensive Map of Unreliability, Misuse and Systemic Risks. Stiftung Neue Verantwortung. Available online: https://www.stiftung-nv.de/sites/default/files/snv_governing_general_purpose_ai_pdf.pdf (accessed on 18 December 2023).
  45. Marshall, Paul. 2022. Scandal at the Post Office: The Intersection of Law, Ethics and Politics. Digital Evidence and Electronic Signature Law Review 19: 12–28. [Google Scholar] [CrossRef]
  46. Martín Delgado, Isaac. 2022. Automation, Artificial Intelligence and Sound Administration. A Few Insights in the Light of the Spanish Legal System. European Review of Digital Administration & Law 3: 9–30. [Google Scholar]
  47. Meers, Jed, Simon Halliday, and Joe Tomlinson. 2023. Why we need to rethink procedural fairness for the digital age and how we should do it. In Research Handbook on Law & Technology. Edited by Bartosz Brożek, Olia Kanevskaia and Przemysław Pałka. Cheltenham: Edward Elgar, pp. 468–82. [Google Scholar]
  48. Miller, Paul. 2023. A New ‘Machinery of Government’? The Automation of Administrative Decision-Making. In Money, Power, and AI. Automated Banks and Automated States. Edited by Zofia Bednarz and Monika Zalnieriute. Cambridge: Cambridge University Press, pp. 116–35. [Google Scholar]
  49. Ministry of Foreign Affairs of Japan. 2023. G7 Leaders’ Statement on the Hiroshima AI Process. Available online: https://www.mofa.go.jp/ecm/ec/page5e_000076.html (accessed on 18 December 2023).
  50. OECD. 2023. Updates to the OECD’s Definition of an AI System Explained. Available online: https://oecd.ai/en/wonk/ai-system-definition-update (accessed on 19 December 2023).
  51. OECD.AI. 2023. Policy Observatory. Available online: https://oecd.ai/en/policy-areas (accessed on 18 December 2023).
  52. Ranchordás, Sofia. 2022. Empathy in the Digital Administrative State. Duke Law Journal 71: 1341–89. [Google Scholar] [CrossRef]
  53. Royal Commission into the Robodebt Scheme. 2023. Final Report. Available online: https://robodebt.royalcommission.gov.au/publications/report (accessed on 3 February 2024).
  54. Ryan-Mosley, Tate. 2023. Why the EU AI Act Was So Hard to Agree on. Three Key Issues That Jeopardized the EU AI Act. MIT Technology Review. Available online: https://www.technologyreview.com/2023/12/11/1084849/why-the-eu-ai-act-was-so-hard-to-agree-on/ (accessed on 3 February 2024).
  55. Røhl, Ulrik Bisgaard Ulsrod. 2022. Automated, Administrative Decision-making and Good Administration. Friends, Foes or Complete Strangers? Ph.D. thesis, Aalborg University, Aalborg, Denmark. [Google Scholar] [CrossRef]
  56. Sanchez-Graells, Albert. 2024a. Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance. Oxford: Oxford University Press. [Google Scholar]
  57. Sanchez-Graells, Albert. 2024b. Public Procurement of Artificial Intelligence: Recent Developments and Remaining Challenges in EU Law. LTZ (Legal Tech Journal) 2/2024. Available online: https://ssrn.com/abstract=4706400 (accessed on 3 February 2024).
  58. Sinclair, Alexandra J. 2023. A Tale of Two Systems: An Account of Transparency Deficits in the Use of Machine Learning Algorithms to Detect Benefit Fraud in the UK and The Netherlands. The Digital Constitutionalist. Available online: https://digi-con.org/a-tale-of-two-systems-an-account-of-transparency-deficits-in-the-use-of-machine-learning-algorithms-to-detect-benefit-fraud-in-the-uk-and-the-netherlands/ (accessed on 19 December 2023).
  59. Smuha, Nathalie A. 2021. Beyond the individual: Governing AI’s societal harm. Internet Policy Review 10: 1–32. [Google Scholar] [CrossRef]
  60. Suksi, Markku. 2023. The Rule of Law and Automated Decision-Making. Exploring Fundamentals of Algorithmic Governance. Heidelberg: Springer. [Google Scholar]
  61. Sunstein, Cass R. 2022. Governing by Algorithm? No Noise and (Potentially) Less Bias. Duke Law Journal 71: 1175–205. [Google Scholar] [CrossRef]
  62. Tomlinson, Joe, Eleana Kasoulide, Jed Meers, and Simon Halliday. 2023. Whose procedural fairness? Journal of Social Welfare and Family Law 45: 278–93. [Google Scholar] [CrossRef]
  63. Van Kolfschooten, Hannah, and Carmel Shachar. 2023. The Council of Europe’s AI Convention (2023–2024): Promises and pitfalls for health protection. Health Policy 138: 104935. [Google Scholar] [CrossRef] [PubMed]
  64. Wachter, Sandra. 2022. The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law. Tulane Law Review 97: 149–204. [Google Scholar] [CrossRef]
  65. Widlak, Arjan, Marlies van Eck, and Rik Peeters. 2021. Towards principles of good digital administration: Fairness, accountability and proportionality in automated decision-making. In The Algorithmic Society. Technology, Power, and Knowledge. Edited by Marc Schuilenburg and Rik Peeters. Abingdon-on-Thames: Routledge, pp. 67–84. [Google Scholar]
  66. Wolswinkel, Johan. 2022. Comparative Study on Administrative Law and the Use of Artificial Intelligence and Other Algorithmic Systems in Administrative Decision-Making in the Member States of the Council of Europe. Available online: https://coe.int/documents/22298481/0/CDCJ%282022%2931E+-+FINAL+6.pdf/4cb20e4b-3da9-d4d4-2da0-65c11cd16116?t=1670943260563 (accessed on 19 December 2023).
  67. Wróbel, Izabela. 2022. Artificial intelligence systems and the right to good administration. Review of European and Comparative Law 49: 203–23. [Google Scholar] [CrossRef]
  68. Yeung, Karen. 2019. A Study of the Implications of Advanced Digital Technologies (including AI Systems) for the Concept of Responsibility within a Human Rights Framework. Available online: https://rm.coe.int/a-study-of-the-implications-of-advanced-digital-technologies-including/168096bdab (accessed on 19 December 2023).
  69. Yeung, Karen. 2022. The New Public Analytics as an Emerging Paradigm in Public Sector Administration. Tilburg Law Review 27: 1–32. [Google Scholar] [CrossRef]
  70. Zerilli, John. 2023. Process Rights and the Automation of Public Services through AI: The Case of the Liberal State. Just Security. Available online: https://www.justsecurity.org/89758/process-rights-and-the-automation-of-public-services-through-ai-the-case-of-the-liberal-state/ (accessed on 18 December 2023).
