1. Introduction
We are currently undergoing a period of profound and interrelated transformations that are reshaping the social, cultural, institutional, and economic dimensions of human life. These changes—many already underway and others imminent—are expected to generate wide-ranging global impacts. Among them, the digital transition, propelled by the rapid evolution of artificial intelligence (AI), stands as particularly significant (
Russell and Norvig 2016).
AI is catalysing fundamental shifts in reasoning processes, analytical paradigms, and operational systems. It is redefining human experience, reshaping decision-making mechanisms, and altering the institutional, economic, and ecological frameworks within which societies function. In the public sector, AI holds the promise of revolutionising how governments, administrative agencies, and judicial institutions operate (
Galetta 2025;
Rangone 2023;
Sourdin 2021;
Boughey and Miller 2021). By enabling the processing of vast datasets with unprecedented speed and complexity, AI offers the potential to enhance decision accuracy, increase administrative efficiency, and enable the delivery of more responsive, evidence-informed public services, amongst other things (
Paul et al. 2024;
Surden 2019).
However, the advancement of such technological capacities also raises pressing ethical, legal, and epistemological challenges. The integration of AI into public governance structures raises intricate questions regarding foundational legal principles such as accountability, transparency, due process, and democratic legitimacy. These developments compel legal scholars and practitioners to interrogate how law can continue to function as a safeguard for fundamental rights and democratic values in an age defined by technological disruption (
Fernandez 2023).
This context demands a broader re-evaluation of the role of law amid ongoing innovation. As scientific and technical rationalities increasingly dominate public decision-making, one question becomes unavoidable: does the law still possess the capacity to engage with and regulate these fast-changing realities, or is it at risk of obsolescence? If law retains such regulatory and normative capacity, an even more pressing question arises: can law actively shape societal transformation? In this light, public procurement—traditionally perceived as a technical or administrative mechanism—warrants re-examination. Might it instead serve as a strategic legal instrument capable of directing innovation and influencing the trajectory of structural change?
This study answers affirmatively. It argues that law holds transformative potential: a capacity not only to manage innovation and structural change, but also to help direct it. When appropriately conceptualised and deployed, legal systems can effectively respond to the complexities of digital transformation while aligning with ethical imperatives and democratic norms. The argument is grounded both logically and empirically: if the law were incapable of adaptation, it would risk irrelevance in the face of technologically induced transformation. Thus, the law’s ability to shape change is not only instrumental to the public interest but also vital to the legitimacy and continued relevance of legal systems themselves.
As we navigate this evolving landscape, it becomes necessary to articulate a foundational concept of transformative law. Alongside the traditional understanding of law as a normative system defined by internal coherence and formal structure, a complementary conception has emerged—one which views law as a functional instrument (
Unger 1996;
Kjaer 2021). This instrumental perspective sees law as a means of pursuing specific goals that are rooted in positive law, derived from principles of the legal system, or enshrined in constitutionally and internationally protected rights. The idea of transformative law thus does not supersede positive law; rather, it builds on this foundation to articulate a broader, more dynamic vision. It frames law not simply as a regulatory tool but as a strategic architecture capable of fostering significant and durable transformations in social, economic, and institutional domains (
Kjaer 2022).
Transformative law engages proactively with emerging domains, reshaping legal objects and reconfiguring traditional areas of legal intervention. It enables law to transcend its classical boundaries and interact more directly with evolving social realities, thereby fostering a deeper and more sustained relationship between law and society (
Kampourakis 2022).
Against this conceptual backdrop, we now turn to consider whether this notion of transformative law can be extended to the domain of public procurement. Could public procurement serve not only to acquire innovative goods and services but also as a tool for governing and steering the digital transition, in a way capable of producing lasting transformations in existing institutional arrangements?
To address this, we draw upon the literature concerning public-sector interventions in markets, which suggests that strategically directed public demand can catalyse systemic transformations not only within the public sphere but also across the private sector. This may ultimately reshape broader economic and societal structures (
OECD 2017,
2025). Given the considerable share of public procurement in GDP, this literature holds that procurement contracts, when strategically designed, can support public policy objectives (
Andhov 2021).
Yet, this framework faces significant practical challenges, including those related to demand aggregation, procedural limitations, and the risks assumed by public administrators. If we wish to imbue public procurement with a truly transformative function, then the traditional procurement framework must be reimagined or, more precisely, operationalised in light of transformative legal theory (
Schooner and Piga 2022). As recently argued, the concept of transformative law in public procurement implies a departure from a narrow focus on isolated procedures and outcomes. Instead, it calls for the recognition of actions, initiatives, or collective behaviours that facilitate, reinforce, or stabilise systemic change (
Edler 2023).
Such a reframing not only necessitates reconsidering foundational principles of procurement law but also requires grappling with a range of legal and practical difficulties. This study examines how public contracts—particularly those related to artificial intelligence—may contribute to transformative shifts in State governance, and considers the theoretical adjustments required to make such changes feasible. It focuses on the potential “regulatory” impact of public contracts in this area, particularly in shaping the nature and characteristics of artificial intelligence purchases. This, in turn, affects the qualitative dimension of the State’s transformative change resulting from such purchases, both prospectively and retrospectively. A strong correlation therefore emerges between the use of artificial intelligence in the exercise of public functions and the public contracts that determine a particular use of artificial intelligence.
2. Public Procurement of Artificial Intelligence: Towards Transformative Change
Assuming the legitimacy of employing artificial intelligence in the performance of public functions (as has generally been accepted, and as is now stated in the Artificial Intelligence Act (Regulation (EU) 2024/1689, AI Act)), and recognising that public entities typically lack the internal capacity to develop such systems autonomously, it becomes essential to ensure that procurement processes are designed to comply with the principles underpinning so-called algorithmic legality. These principles include, inter alia, the non-exclusivity of automated decision-making, algorithmic transparency, and safeguards against discrimination (
Civitarese Matteucci 2021).
Beyond these foundational guarantees, it is necessary to examine the specific activities, instruments, and, most critically, challenges through which public procurement can serve as a vehicle for meaningful and transformative change in the structure and functioning of the State.
In this respect, the role of AI in public procurement can be viewed in two main ways. First, AI may be regarded as a tool embedded within the procurement process itself, capable of enhancing its efficiency, integrity, and procedural development. Second, AI may constitute the object of procurement, with public administrations acquiring such systems with the explicit aim of transforming the execution of their institutional duties, functions, and services.
Given that AI outputs must exhibit specific structural, functional, and accountability-related features, the central legal question becomes how to ensure that these qualities are achieved. In such a framework, legal scholarship increasingly views public procurement not merely as an acquisition procedure but also as an essential regulatory tool for promoting such characteristics (
Rubenstein 2023). Procurement, in this sense, becomes a means of shaping the operability and effects of AI systems involved in the performance of public functions (
Adler 2025).
This perspective is notably reflected in the U.S. legal literature, where the public procurement process is reconstructed—at least as an initial framework—as a strategic tool for ensuring both the effective and responsible use of AI in the public sphere (
Coglianese and Lampmann 2021). Public procurement is thus also conceived as a politically meaningful process, enabling the democratic oversight of technical decisions that are often structurally opaque, both in their initial configuration and in their eventual outcomes (
Mulligan and Bamberger 2019).
Consequently, the relationship between public contracts and AI is increasingly interpreted in terms of guarantees. From a procedural standpoint, this is exemplified in the development of audit mechanisms and performance assessments specific to the execution of AI-related public contracts. Substantively, this relationship becomes especially significant when contract governance is deemed capable of ensuring that public decisions involving AI are endowed with certain normative qualities, including transparency, accuracy, and protection from potential discriminatory effects.
Comparable reflections have recently emerged in French legal scholarship, in which the potential of AI has been highlighted not only to improve the structuring and execution of public procurement procedures, thereby enhancing their integrity and reducing corruption risks, but also as a subject of regulation in its own right. Consequently, public procurement is increasingly seen as a means to acquire AI systems defined by rigorous qualitative standards, particularly through the ex ante specification of requirements in tender documents, the design of innovative procurement procedures, and the establishment of pre-deployment verification mechanisms (
Hasquenoph 2024).
In Italian legal scholarship, a similar line of reasoning has more recently been proposed, introducing a preliminary distinction between the use of artificial intelligence as a tool for the conduct of procurement procedures by public institutions and artificial intelligence as the object of public procurement procedures, with subsequent focus on the latter. On this basis, the groundwork has been laid for a comprehensive analysis of the foundations and limitations of public contract law as an instrumental mechanism for the regulation of artificial intelligence (
Licata 2024).
Such approaches are further reflected in the guidelines of prominent international institutions and national governments, which aim to facilitate the acquisition of AI technologies by the public sector while simultaneously promoting their normative alignment with well-defined principles. In multiple policy documents, the European Commission, too, has emphasised the strategic use of public demand as a lever for fostering the responsible and value-based dissemination of AI technologies.
In conclusion, the conceptualisation of public AI contracts not only as objects of procurement but also as instruments of regulation is being increasingly consolidated in legal scholarship. Within the domain of public procurement, such contracts are now widely recognised as a means to pursue broader objectives that go beyond the immediate goals of the acquisition itself. These include the promotion of social rights, environmental sustainability, technological innovation, democratic accountability, and the governance of transitions. This shift underscores the transformative potential of public procurement in the age of artificial intelligence—a dynamic that is vividly captured in the most recent and advanced academic literature (
Edler 2023).
3. Critical Elements and Models
Adopting a conceptual framework that interprets public contracts as a tool of regulation brings to light a series of intrinsic critical elements. These become particularly pronounced in the domain of artificial intelligence, where the architecture of the regulatory framework remains inherently subject to constant evolution.
In this context, the traditional “unidirectional” configuration of public interest through public procurement appears increasingly questionable. Public purchasers of AI systems are obliged to comply with specific substantive principles when using the products they are buying, such as transparency, non-exclusivity of automated decision-making, and non-discrimination. Yet these characteristics must also be pursued through the procurement process itself, so that public authorities are effectively called upon to (self-)regulate through the very contracts they enter into. Such a paradigm, however, gives rise to latent tensions. For instance, algorithms optimised for greater accuracy—thus enhancing functional adequacy—tend to exhibit reduced transparency (
Hunsicker et al. 2025), thereby undermining one of the core principles that public buyers (as a part of State governance) are required to ensure (
Coglianese and Lehr 2019). Similarly, efforts to uphold the non-exclusivity of automation inevitably affect the extent of human involvement: if machine-made decisions are still to be effectively scrutinised, costly human labour is required. Once again, the existence of these conflicting interests within the processes of AI system acquisition reveals that the public authority cannot maintain a position of neutrality concerning the regulatory shifts it seeks to implement via procurement mechanisms (
Sanchez-Graells 2024c;
Licata 2024).
Additionally, the substantive quality of AI deployment often becomes apparent only during contract performance, rather than at the point of contract award. It is only at this stage that compliance with the underlying public interest principles can be effectively assessed. This consideration underscores the importance of the contract execution phase while simultaneously challenging the notion that public procurement can serve as a self-sufficient regulatory moment in relation to AI deployment within public functions. Even if procurement processes can play a pivotal role in establishing technical standards such as interoperability requirements, or in preventing vendor lock-in, they appear insufficient—in and of themselves—to guarantee substantively adequate, reliable, and harmonised regulatory outcomes in the context of public-sector AI (
Sanchez-Graells 2024c).
This centrality of contract performance introduces an additional layer of complexity, fundamentally related to the institutional capacity of public authorities to oversee not only procurement procedures but also the implementation phase. Beyond traditional legal concerns, such as the management of intellectual property rights, organisational issues become decisive. Indeed, public procurement of AI must be situated within an institutional framework equipped to pursue and achieve its objectives. While conventional legal scholarship has typically explored organisational dimensions in terms of the contracting authority’s procedural capabilities, the procurement of AI demands a broader organisational readiness, one that goes beyond procedural competence and directly influences the effectiveness of AI adoption. Consider, for example, the role of data governance: Data, which are essential for AI functionality, must be accurately acquired, catalogued, and, more broadly, “administered” (
Miranzo Diaz 2023). This necessity points to the importance of an overarching governance model and the establishment or reinforcement of specialised organisational units with the requisite expertise to address the multidimensional challenges at hand (
Delgado 2023).
Finally, procedural aspects require careful reconsideration. The issue is not limited to identifying the most suitable procurement procedure but extends to the post-award validation of AI systems before deployment, a requirement that now finds formal expression in the European AI Act. Although still underutilised, advanced procurement models such as innovation partnerships may prove particularly appropriate in this context, allowing for the co-development and pre-validation of AI solutions before their definitive acquisition (
Hasquenoph 2024).
4. Public and Private Interaction
Further critical elements must be identified within the relationship established between public purchasers and private operators, not only in the context of procurement processes but also in the post-purchase phase. The transformative change brought about by artificial intelligence should, in principle, be governed by public entities themselves. However, these entities are not always capable of adequately assessing the extent to which a decision-making process is effectively autonomous. Private contractors often hold a position of technical superiority and, consequently, a contractual advantage that enables them, at the very least, to influence the content of contracts and the functional outcomes they produce (
Bradford 2023).
To address this issue, several public institutions, including the European Commission’s Public Buyers Community, have developed contractual models for public AI procurement. These models provide guidance on some of the most relevant challenges in AI procurement, with the aim of fostering transparent and trustworthy artificial intelligence systems (
European Commission, Public Buyers Community 2025). The European contractual clauses were created before the approval of the EU AI Act and have only recently been updated. Nonetheless, they largely retain their original structure, distinguishing between “high-risk” and “non-high-risk” systems and offering two corresponding sets of clauses: a “comprehensive” version for the former and a “lightweight” version for the latter. Still, the alignment between the European regulatory framework and these contractual models remains imperfect. For example, even the “lightweight” version includes requirements—such as system intelligibility and human oversight—that under the Regulation generally apply only to high-risk systems (
Sanchez-Graells 2024c;
Licata 2024).
More broadly, the actual usefulness of these instruments—at least in their current design—remains questionable when it comes to offering practical support for the regulation of AI systems procured by public entities. The substantive content of the clauses highlights the most significant shortcomings: They often lack clearly defined provisions and continue to refer either to future regulatory measures or to documentation and standards provided by private operators. The latter, in particular, raises the risk of public actors being “captured” by market operators who have defined those standards, or at least contributed to defining them.
Many clauses are formulated in broad, principle-based terms. As such, they are fundamentally inadequate to provide a complete regulatory framework or to ensure sufficiently precise outcomes. It is therefore foreseeable that only a limited number of highly qualified public purchasers will be capable of resisting the imposition of market-driven standards. This raises serious concerns regarding functionality, the protection of rights affected by AI-driven decision-making, and the risk of technical lock-in that could hinder future modifications of the systems in use (
Sanchez-Graells 2024a).
What can therefore be foreseen is that, in the context of public AI procurement, the transformative change induced by these contracts is embedded within a more complex relationship between public and private powers. As such, it becomes part of a broader process in which public contracting in the AI field takes on systemic characteristics that are not entirely detached from, yet not exclusively tied to, individual procurement procedures.
The future exercise of regulatory powers concerning artificial intelligence—now directly envisaged at the European level through the AI Act—may, and indeed should, provide significant operational tools in this respect. This is necessary to ensure the effective development of AI in the performance of public functions, while also guaranteeing outcomes of adequate quality. In this sense, the normative role of contracts—and public contracts in particular—stands to be enriched both in content and in its potential trajectories of development by the (future) exercise of administrative regulation in the field of artificial intelligence.
5. Regulatory Complexity
The potentially transformative change that may result from public procurement of artificial intelligence is thus affected by the substance of administrative regulation governing the same AI. In addition to the legal framework applicable to public contracts and its regulatory dimension, other sector-specific regulations come into play, rendering the subject matter significantly complex. This depends not only on the challenges of delineating the legally relevant domains, but also on the difficulties in identifying which public authorities are primarily responsible for regulatory intervention. Moreover, this regulatory framework is not confined to the domestic level; rather, it must contend—at least in terms of its practical effects—with the global relevance of the issue.
A now indispensable regulatory benchmark is the already mentioned European Artificial Intelligence Act (AI Act), instituted by Regulation (EU) 2024/1689 (
Cotino Hueso and Galetta 2025). This normative instrument shapes the regulatory landscape of artificial intelligence, addressing not only the substantive characteristics of AI systems and their outcomes but also the procedural framework governing their acquisition and use. The public sector is also subject to these rules and, in fact, some of them—procedural ones included—are explicitly directed at public authorities. It must first be noted that a substantial number of the qualitative and operational obligations introduced by the AI Act, including data governance (Article 10), transparency (Article 13), and human oversight (Article 14), apply solely or directly to systems categorised as “high-risk” AI. As a result, the legal relevance of many of the European rules on AI is essentially conditional on the concurrent fulfilment of two cumulative requirements: (a) the identification of a (decision-making) system as AI and (b) its classification as “high-risk.”
The definition of an AI system, set out in Article 3 of the AI Act, describes a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions capable of influencing physical or virtual environments. Recently, the European Commission issued guidelines to assist in determining whether a system constitutes an AI system within the meaning of the AI Act, in order to facilitate the effective application and enforcement of the Act (
European Commission 2025;
European Law Institute 2024).
The definition of a “high-risk” AI system is more elaborate, structured around both inclusions and exclusions as detailed in Article 6 and Annex III of the AI Act. Aside from the structural notion outlined in the first paragraph of Article 6, wherein a system is deemed “high-risk” if it is a safety component of a product or requires third-party conformity assessment for market placement, Annex III offers more specific guidance. It identifies systems as “high-risk,” inter alia, if they perform biometric evaluations of individuals; are used in managing and operating digital infrastructure and public services (not limited to essential ones); relate to educational and vocational training activities; concern employment and worker management; involve migration, asylum, and border control matters; regard access to and enjoyment of essential private and public services or benefits; or pertain to the administration of justice.
These descriptions require specification through the framework of legal reasoning but will necessarily evolve through enforcement. Moreover, the categories listed in Annex III must be coordinated with the derogation mechanisms outlined in Article 6(3). Pursuant to this provision, a system is not considered “high-risk” if it is intended to (i) carry out a limited procedural task; (ii) improve the outcome of a previously completed human activity; (iii) detect decision-making patterns or deviations therefrom, without making decisions absent appropriate human oversight; or (iv) perform preparatory tasks related to cases covered by Annex III.
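The rule-and-exception structure just described can be rendered, for expository purposes only, as a simplified decision sketch. The following Python fragment is purely illustrative: the category and derogation labels are hypothetical shorthand for the Annex III headings and the Article 6(3) derogations summarised above, not the Act’s own taxonomy, and it deliberately omits the many qualifications the Regulation attaches to each category.

```python
from typing import Optional

# Hypothetical shorthand for the Annex III categories summarised above.
ANNEX_III_CATEGORIES = {
    "biometric_evaluation",
    "digital_infrastructure",
    "education_and_training",
    "employment_and_worker_management",
    "migration_asylum_border",
    "essential_services_access",
    "administration_of_justice",
}

# Hypothetical shorthand for the Article 6(3) derogations (i)-(iv).
ARTICLE_6_3_DEROGATIONS = {
    "limited_procedural_task",
    "improves_completed_human_activity",
    "pattern_detection_with_human_oversight",
    "preparatory_task",
}

def is_high_risk(annex_iii_category: Optional[str] = None,
                 derogation: Optional[str] = None,
                 safety_component: bool = False) -> bool:
    """Simplified sketch of the 'high-risk' classification decision."""
    # Article 6(1): safety components of products subject to third-party
    # conformity assessment are high-risk in any event.
    if safety_component:
        return True
    # Annex III inclusion, subject to the Article 6(3) exceptions:
    # a listed system escapes the classification if a derogation applies.
    if annex_iii_category in ANNEX_III_CATEGORIES:
        return derogation not in ARTICLE_6_3_DEROGATIONS
    return False
```

On this sketch, a system used in the administration of justice would be classified as high-risk unless it merely performed a preparatory task. The actual assessment, of course, turns on legal interpretation and evolving enforcement practice rather than on any such mechanical test, which is precisely the point made above about the need for specification through legal reasoning.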
The relationship between rules and exceptions in typologically identifying relevant AI systems—and their corresponding legal regimes—is far more complex than this brief overview might suggest. Nevertheless, rather than offering a detailed exegesis of the AI Act in this field, it is worth emphasising that the classification of a given AI system as “high-risk” entails a specific regulatory framework, comprising both substantive and procedural components, that shapes the system’s design and its potential outcomes. The guidelines recently issued by the European Commission in accordance with Article 96 of the AI Act to some extent clarified the practical relevance of high-risk AI systems, including illustrative examples (
European Commission 2025). Moreover, under Article 95 of the same Act, the European AI Office and Member States are tasked with encouraging and facilitating the drawing up of codes of conduct intended to foster the voluntary application to all AI systems of some of the mandatory requirements for high-risk AI systems, taking into account technical solutions and best practices. These efforts could help resolve interpretative uncertainties and promote greater consistency in the regulation of AI in both the public and private sectors. At present, however, the potential impact of the exceptions set out in Annex III risks diminishing the operational relevance of the structural features and outcomes the Regulation seeks to mandate for certain types of AI systems. There thus emerges the contingent risk of a self-implementing approach, which brings with it the disadvantages of legal uncertainty and inconsistent practices. The potential reliance on industry standards may also lead to a form of technical capture of public purchasers by more competent and better-equipped private operators.
These considerations highlight the significant challenges in promoting a qualified use of AI through public contracts. This is further reinforced by analysis of the elements mandated by the AI Regulation when a system is classified as “high-risk.” Such systems must meet specific requirements, including risk management, accuracy, robustness, corrective mechanisms, and data governance. Regarding the substantive obligations concerning the operation of high-risk AI systems, the AI Act imposes duties on both producers and users, which, in the case of public procurement, translates into responsibilities shared between public buyers and suppliers. For instance, transparency rules require the supplier to provide a range of information adequate to ensure that the system’s functioning is sufficiently transparent to enable the deployer to interpret its output and use it appropriately. However, public authorities using these systems are also subject to transparency obligations, which, in turn, presuppose the supplier’s cooperation. Similar principles apply to rules concerning “human oversight,” a term used to express the non-exclusivity of automation. Indeed, obligations in the design and development of systems are prescribed to ensure this necessary form of supervision by the individual(s) responsible for their use in a public decision-making context (
Sanchez-Graells 2024b).
Delving deeper, it becomes clear that the aforementioned “rules,” beyond their (hypothetical) relevance for classifying a system as “high-risk,” remain at a high level of abstraction, closely resembling principles rather than rules proper. Indeed, their clarification will probably emerge only through extensive regulatory practice. At this stage, at least, this implies that awards of AI procurement contracts by public authorities cannot yet provide definitive regulation of the subject.
Since—at least under certain conditions, and where certain criteria are met (foremost among them the classification as “high-risk”)—AI-related public contracts are subject to specific public rules and regulatory obligations, the systemic relevance of regulation is increasing. Yet a peculiarity lies in the very nature of this development: contractual practices become instruments for regulating the exercise of public functions while themselves being subject to regulation. This gives rise to an unprecedented intersection between contract and regulation, one requiring robust mechanisms of coordination (
Licata 2024).
One may thus observe the emergence of a complex regulatory process, originating from the AI Act but subsequently layered with diverse legal instruments aimed at fostering the normative development of the sector, including both administrative regulations and contractual practices. Moreover, this regulatory framework—encompassing, inter alia, rules on AI procurement—does not operate in isolation; rather, it intersects with other sector-specific regimes, such as those concerning data protection and cybersecurity. This convergence renders the balancing of interests particularly challenging, as it is embedded within an administrative apparatus that is further complicated by the need to coordinate across multiple levels and objectives of regulatory practice (
Selten and Klievink 2024).
6. Public Procurement of Artificial Intelligence as Infrastructure
The analysis conducted herein highlights a complex dimension that suggests the need for a conceptual redefinition of the system of public contracts for artificial intelligence, which may be conceived as an infrastructure within which transformative change unfolds.
Recent studies, building upon the conceptualisation of infrastructure elaborated in the social sciences (
Star 1999), have attempted to reconstruct a notion of legal infrastructure. This effort has particularly focused on the relationship between the materiality of law and its exercise as a social practice (
Byrne et al. 2024), thereby underscoring a dimension of publicness that is not strictly confined to the notion of public legal status (
Kingsbury and Maisley 2021). Such a perspective proves useful in providing a more accurate representation of the legal system as a whole. It reveals a stability shaped by both internal and external factors, while also emphasising the continuity that emerges through application dynamics involving the plurality of actors engaged in the system, even in the face of change.
More recent legal scholarship has revived this general framework to apply it specifically to public procurement (
Davies and Sanchez-Graells 2025). Accordingly, procurement is not regarded merely as an instrument for the realisation of (material) infrastructures. Rather, it is understood as an infrastructure in its own right, characterised by temporal relevance and by an organisational systematicity that transcends political cycles. Within this perspective, the functional elements and regulatory dimension of public procurement are not displaced. Instead, they are incorporated into a broader theoretical framework, one better suited to analyses that focus directly on the structural role assumed by public contracts: enabling public institutions to operate effectively and to fulfil their functions most appropriately.
The conceptual model that views the public procurement system as infrastructure is particularly useful when applied to the acquisition of artificial intelligence by public authorities. As the foregoing analysis has shown, what is significant is not merely the purchase itself but the acquisition of an instrument that enables functions to be exercised in ways that can produce transformative change. An infrastructure is thereby constructed along which public functions develop. Such infrastructures require adequate design (a matter largely associated with the purchase itself) and ongoing maintenance to ensure optimal utilisation (linked to data management in AI systems, an issue mostly located in the contract performance phase). They also require constant protection, both ex ante and ex post, involving not only internal variables, such as organisational ones, but also “external” variables, such as those connected with cybersecurity regulation. The infrastructures derived from the public procurement of artificial intelligence thus require a complex system of “maintenance” involving several private and public actors. This includes monitoring contract performance, regulatory operations, and any other administrative activities that foster an environment in which public functions driven by artificial intelligence can develop optimally.
Thus, what emerges is a progressively evolving system, the “outcome” of which depends upon the acquisition of professional expertise, the resilience of foundational structures such as technological platforms, inter-institutional cooperation, procedural sedimentation, incremental improvements, and progressive implementation. In this sense, enrichment and development should be evaluated not solely in terms of the individual infrastructure resulting from a single contract utilised for the exercise of one or more functions but rather in a manner that takes the overall development of the system into consideration. From this perspective, as recent studies in public management have pointed out, each result—or even the simple initiation of a specific procedure—may signal the “direction” of the transformative trajectory that is intended to be imparted to the system, thereby opening new avenues of institutional interaction between the public and private sectors (
Edler 2023).
As a method of analysis, it becomes necessary to examine the normative projection that each case is capable of producing. This entails broadening the scope of inquiry to include contractual documents and their modes of production, which are now situated within an inevitably more complex regulatory framework. As a “policy” orientation, by contrast, specific attention must be given to normative and regulatory interventions that not only establish but also ensure the ongoing maintenance and progressive strengthening of those infrastructures that make the exercise of public functions through artificial intelligence possible. Only in this way can a stable foundation be assured for the transformative change they are expected to generate.
Ultimately, it may be asserted that the exercise of public functions through artificial intelligence has the tangible potential to effectively determine transformative change within the State (
Dunleavy and Margetts 2023). This concerns not only the exercise of functions per se but also the modalities through which such exercise occurs. From this standpoint, public procurement of artificial intelligence assumes a central role, establishing the conditions under which such functions can be carried out in ways that are effective and consistent with the fundamental legal principles required for the lawful use of AI systems. Such public contracts present complex theoretical and operational challenges, prompting a conceptual and methodological repositioning of some of the “essentials” of public procurement law. The infrastructural approach thus offers a valuable means of clarifying key issues that increasingly demand attention, beginning with the relational configurations that emerge in practical implementation at the intersection of law and society.
7. Conclusions
This essay represents an attempt to reconstruct public contracts for artificial intelligence from a theoretical standpoint. On the assumption that procurement procedures lay the groundwork for artificial intelligence systems, public contracts were treated as an essential hub through which to view the transformative change that artificial intelligence brings about in the State.
Taking into account the specific technical and legal characteristics that artificial intelligence must possess for its use to be considered legitimate, the peculiar nature of the purchase leads to the conclusion that public procurement procedures for artificial intelligence systems have substantial regulatory content. From this perspective, some of the most relevant questions in this area were analysed by combining theoretical and practical issues, beginning with the distinction between artificial intelligence as a tool for carrying out procurement procedures and artificial intelligence as the object of procurement by public entities for the exercise of public functions.
By analysing the structure of these procedures, multiple critical issues were highlighted. The most relevant is the structuring of the underlying public interests, because maximising the interests derived from the use of artificial intelligence cannot always easily be reconciled with the need to confer specific characteristics upon that use. In this sense, emphasis has been placed on the importance of choosing the most appropriate procurement procedures and, above all, on the relevance of the contract execution phase, in which the use of artificial intelligence can be constantly verified.
Another critical issue is the relationship between public purchasers and private entities, the latter of which often possess considerable market power and expertise. Although public contracts for artificial intelligence may have regulatory implications, it has been suggested that the administrative regulation of artificial intelligence could improve its outcomes and strengthen both its ends and its means, thereby reshaping the complexity of the relationship between the public and private sectors in this area.
However, the regulation of artificial intelligence is extremely complex, beginning with the definitional logic that conditions its exercise, as is evident in the context of the AI Act. Nevertheless, if implemented consistently, such regulation can enhance the deployment of artificial intelligence in terms of both development and safeguards. It is therefore necessary to coordinate administrative regulation and the implementation of artificial intelligence procurement procedures within an ongoing dialogue that is both “operational” and “regulatory.”
Finally, the methodological perspective adopted, drawing on recent contributions to legal theory, considers these issues within an infrastructural conception of law, and of public contracts in particular. This involves a variety of public and private actors who contribute to the “maintenance” of public contracts for artificial intelligence. These contracts must be regarded as genuine infrastructures, through which important public functions are carried out and will increasingly be carried out in the future.