Article

Artificial Intelligence and Public Sector Auditing: Challenges and Opportunities for Supreme Audit Institutions

by Dolores Genaro-Moya 1, Antonio Manuel López-Hernández 2,* and Mariia Godz 3

1 Department of International and Spanish Economics, Faculty of Economics and Business Studies, University of Granada, 18071 Granada, Spain
2 Department of Accounting and Finance, Faculty of Economics and Business Studies, University of Granada, 18071 Granada, Spain
3 Department of Computer Science and Artificial Intelligence, School of Computer and Telecommunication Engineering, University of Granada, 18014 Granada, Spain
* Author to whom correspondence should be addressed.
World 2025, 6(2), 78; https://doi.org/10.3390/world6020078
Submission received: 27 February 2025 / Revised: 22 May 2025 / Accepted: 26 May 2025 / Published: 1 June 2025
(This article belongs to the Special Issue Data-Driven Strategic Approaches to Public Management)

Abstract:
The application of artificial intelligence (AI) is growing exponentially in public entities, contributing to the improvement of the design and provision of services, as well as to the internal management and efficiency of public institutions. However, the potential of AI systems for the public sector also entails a set of risks related, among other areas, to privacy, confidentiality, security, transparency, or bias and discrimination. The Supreme Audit Institutions (SAIs), when auditing public services and policies, must adapt their human and technological resources to this new scenario. This paper analyses the implications of AI penetration in the public sector, as well as the challenges that these technological developments pose to SAIs seeking to improve the effectiveness and efficiency of their auditing tasks. It also reviews the status of the audits of algorithm-based systems carried out by some SAIs. The paper presents a conceptual and exploratory analysis, informed by documentary evidence and case illustrations; given the dynamic evolution of AI research, the findings should be interpreted as a contribution to ongoing debates rather than as definitive conclusions.

1. Introduction

The dynamic and rapidly changing nature of AI research emphasises the importance of updating frameworks and assumptions as new empirical research and technological developments emerge. In this context, the use of AI has a direct impact on the public sector, particularly in its role of oversight and regulation. Mo Ahn [1] notes that AI is expected to become present in all public sector organisations, despite the high entry barriers stemming from its considerable costs, because of its effect on long-term efficiency as the algorithms improve through data processing.
The development of AI will represent, in the coming years, a challenge for the Supreme Audit Institutions (SAIs), the institutions in charge of the independent and external control of the public sector’s economic and financial activity.
SAIs play a crucial role in any democratic state, monitoring the proper use of public funds and overseeing the legality, the accuracy, the efficiency and the effectiveness of public sector operations. The type of control they exercise is external as opposed to internal and/or prior control within the public entities, and therefore, it is independent of the auditee. The results of their work are sent to the legislative power so that they can use them to control the government.
To perform their control functions, they apply audit techniques, methods and standards that will be affected by the introduction of emerging technologies.
In this sense, the introduction of AI is already an important driver of change in audit work, both in the public and the private sectors, so it is expected that there will also be a transformation of the way in which SAIs perform their oversight and control functions. They will most probably have to start dealing with the need to adapt their human and technological resources to an increasingly complex reality. To do this, they should design strategies and make decisions that allow them to prepare for this transformation in the short and long term.
In several countries, a national SAI also has a jurisdictional function whose main objective is to recover the misused public funds and return them to the public treasury. This is the case of Spain, France or Morocco, for example, and this additional function will quite probably be affected by the introduction of AI, but this article will not focus on this field.
Either way, the use of AI in the exercise of the external control of the economic and financial management of the public sector will also present, as in other areas, several opportunities for improving the effectiveness and efficiency of audits.
In this context, the objective of this article is twofold. First, it aims to address a gap in the academic and institutional literature concerning how the increasing use of AI is affecting the external control of public administration, particularly through the lens of Supreme Audit Institutions (SAIs). Although a growing number of studies have examined the implementation of AI in public services, there is a lack of analysis on how external oversight bodies are adapting to this paradigm shift, especially in Spain and other comparable jurisdictions. Second, this article seeks to analyse the opportunities and challenges that the development and application of AI pose for SAIs. To do so, we conduct a qualitative and documentary analysis based on recent reports, audit initiatives and strategic frameworks from several SAIs across different countries. A specific section is dedicated to presenting a diagnosis of the current state of AI-related audit practices, which includes identifying emerging approaches, technological adaptations and institutional barriers; this diagnosis serves as a baseline to inform future developments in the field. Given the rapid and ongoing development of AI technologies, the paper adopts a conceptual and exploratory approach. While it draws on existing empirical documentation and case studies, the analysis should be understood as interpretative and illustrative rather than conclusive, with the aim of fostering institutional reflection and informing future research agendas.
This paper uses a qualitative–documentary approach examining official SAI reports and open-source articles. Our analytical framework focuses on the following: (1) emerging best practices in the application of AI technologies in public sector audits; (2) technical limitations; and (3) institutional capacity. Given the limited scholarly study of AI integration in SAI public sector audits, the chosen approach is justified as a means of identifying critical topics and guiding future theoretical research.

2. Artificial Intelligence in Public Administration

The introduction of AI in all areas of life is confirmed, at this point in the 21st century, as a great revolution comparable to the first industrial revolution in the 18th century, to the second revolution linked to mass or chain production in the 19th century or to the third technological revolution of the 20th century [2], hence the denomination of the fourth industrial revolution (Industry 4.0). Although AI arose in the middle of the last century, the large processing capacity of computers and the spread of the daily use of mobile phones and of the technological objects that make up the IoT (Internet of Things) have provided a large volume of structured and unstructured information that is processed and used to "feed" the algorithms that form the basis of AI. Logically, the use of tools based on AI has spread in both the private and public spheres, although the objective set out in this article addresses exclusively the latter.
As a preliminary step to the analysis of the implications of AI within the public sector, it is appropriate to provide an overview of this technology, briefly including its conceptualisation and typology, to establish a systematic framework, which will facilitate a better appreciation of its effects on public entities.

2.1. Concept and Types of Artificial Intelligence

AI lacks a universally agreed-upon definition in the academic literature [3]. The term was coined in the 1950s by McCarthy [4] as "the science of making intelligent machines", related to the task of using them to understand human intelligence, without being limited to biologically observable methods. The term was used in an academic context for the first time to indicate an emerging field of research that studied, on the one hand, the ability of machines to perform tasks showing intelligent behaviour similar to humans and, on the other, the ability of machines to behave as intelligent agents perceiving the environment and performing actions to achieve some objectives [5]. Subsequently, the definition has been modified to adapt it to the evolving scope of AI over time. For instance, according to Rich (1983) [6], AI is "the study of how to make computers do things at which, at the moment, people are better", focusing on computers as AI instruments. AI has also been considered by Patterson (2004) [7] as a "branch of computer science concerned with the study and creation of computer systems that exhibit some form of intelligence, systems that learn new concepts and tasks, systems that can reason and draw useful conclusions about the world around us, systems that can understand a natural language or perceive and comprehend a visual scene and systems that perform other types of feats that require human types of intelligence".
More recently, the OECD [8,9] has defined AI systems as machine-based systems that, for a given set of human-defined objectives, can make predictions, recommendations or decisions that influence real or virtual environments. AI systems are designed to operate with varying degrees of autonomy. In addition, AI comprises machines that perform cognitive functions like those of humans.
Beyond the former definitions, there are diverse ways to classify AI. One of the most accepted approaches is based on functional capacity, differentiating between applied AI (also known as narrow or weak AI), general AI and strong AI [10,11]. The first type of AI performs specific tasks, such as speech recognition, medical diagnoses or robot control, but is not able to generalise to other tasks. Computers perform tasks intelligently in specific areas, in line with different human capabilities [12]. Although weak AI systems have limited applications, their simplicity makes them particularly useful for tasks with several repetitive movements and variables. Within this category, it is possible to identify diverse types of AI, such as natural language processing, machine learning and artificial vision [13]. General AI refers to the idea that AI, through machines, could match the capabilities of humans [14] and, although it does not yet exist, is a target of AI research. Strong AI, in turn, would surpass human capacities, with machines performing better than and overtaking humans; it is, therefore, considered a potential threat to society.
Based on functionality, AI is classified as follows [15]:
  • Reactive Machines: Those AI systems that work on the data available and respond to external stimuli in real time, without the ability to store data or learn from past experiences. Deep Blue (the chess-playing supercomputer that beat grandmaster Kasparov in the 1990s) and Netflix’s recommendation engine are two examples of this type of AI.
  • Limited Memory: Those AI systems that can store and use past experiences to make predictions or decisions, but whose memory capacity is limited. Some examples of this type are ChatGPT 4, virtual assistants and chatbots, such as Siri or Alexa, and self-driving cars.
  • Theory of Mind: Those AI systems that could understand and respond to human psychological and emotional aspects, potentially leading to more natural and intuitive interactions.
  • Self-Awareness: Those AI systems that can think and act. They possess self-consciousness and understanding and will be capable of thinking about their own existence and beliefs.
Although the last two types do not exist so far, scientists are working to develop them.
It is also common to classify AI as symbolic (rule-based) or non-symbolic. Symbolic AI, also known as “expert systems”, uses symbols and logical rules to represent and manipulate information, describe workflows and produce results. Due to its relative simplicity, this type of AI is more suitable for processes or problems of low complexity, where few actors participate, the actions to be executed are few and changes are infrequent [11]. Non-symbolic AI, which refers to machine learning or ML, consists of a series of techniques that allow machines to learn and make predictions from historical data, based on the identification of patterns, without the need for instructions from a human.
In recent years, ML has become the predominant focus in AI development, allowing a machine to learn and improve its actions based on data without being specifically programmed. ML systems include artificial neural networks and, within these, deep learning (DL) systems, all of which represent specific expressions or subsets of the narrow and limited memory AI systems mentioned above [16,17].
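To make the distinction between rule-based and learned behaviour concrete, the minimal sketch below (in Python, with entirely hypothetical invoice data, column names and thresholds) contrasts a symbolic check written explicitly by a human with a non-symbolic model that learns a comparable pattern from labelled historical records; it illustrates the concepts discussed above rather than reproducing any system cited in this article.

```python
# Illustrative contrast between a symbolic (rule-based) check and a
# non-symbolic (machine learning) model. All data are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical invoices labelled by auditors (1 = irregular).
invoices = pd.DataFrame({
    "amount":      [120, 9800, 450, 15000, 300, 12500],
    "days_to_pay": [30, 5, 28, 3, 31, 4],
    "irregular":   [0, 1, 0, 1, 0, 1],
})

# Symbolic AI: the rule is written explicitly by a human expert.
def rule_based_flag(row):
    return int(row["amount"] > 5000 and row["days_to_pay"] < 10)

invoices["flag_rule"] = invoices.apply(rule_based_flag, axis=1)

# Non-symbolic AI (ML): the pattern is learned from the labelled history.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(invoices[["amount", "days_to_pay"]], invoices["irregular"])
invoices["flag_ml"] = model.predict(invoices[["amount", "days_to_pay"]])

print(invoices[["amount", "days_to_pay", "flag_rule", "flag_ml"]])
```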

2.2. The Penetration of Artificial Intelligence in the Public Sector: Opportunities and Risks

Although analyses and considerations on the potential of AI have mainly focused on the private sector, public entities have progressively become more prominent in relation to AI, performing a wide range of functions, often simultaneously: as direct funders or investors, purchasers of existing solutions, regulators, conciliators, data administrators or users, and service providers [18].
In the development of these functions, the disruptive and innovative power of AI in the public sector will most probably bring opportunities primarily focused on the following areas [19]:
  • The improvement of public administrations’ decision-making, especially in the formulation, implementation and evaluation of public policies: AI systems can capture the interests and concerns of citizens and identify trends and anticipate situations that deserve the attention of public entities to forecast possible results or impacts, increasing the chances of successful interventions [20];
  • The improvement of the design and delivery of more inclusive services to citizens and businesses: The collection and processing of digital data facilitates the improvement of public services, enables interactive engagement with the public and offers guidance or transmits vital information, assisting citizen participation in public sector activities. In the case of infrastructures, the preventive maintenance, fault correction or scheduling of their use according to demand is possible through AI applications, contributing to a more efficient resource utilisation [21];
  • The improvement of the management and internal efficiency of public institutions: AI systems can facilitate the fulfilment of objectives and responsibilities, freeing officials from routine tasks to engage in activities of greater value and complexity. They can also support the allocation and management of financial resources, helping to identify and prevent fraud, diversions or inefficiencies in the allocation and use of public money, among other problems [22]. In this sense, Soylu et al. [23] propose as a case study the government of Slovenia, where, through the availability of open public procurement data, anomalies have been detected, with spikes observed during periods of crisis, elections or the recent global COVID-19 pandemic (a minimal illustration of this kind of anomaly detection appears after this list);
  • The reinforcement of democracy and the fight against corruption: AI tools make it possible, through the examination of information and data, to prevent misinformation and cyber-attacks and to achieve greater transparency and competition, for example, in public procurement processes. Thus, Alhazbi [24] studied the phenomenon of trolling in social networks due to its significant impact on the reputation of public institutions and the increase in polarisation. Henrique et al. [25] proposed using control tools based on machine learning and on statistics to detect non-compliance with public contracts which may seriously affect institutions;
  • The enhancement of public safety and security: AI is already being used in crime prevention and the criminal justice system, enabling faster information processing and a more accurate analysis of criminal actions that would reduce judicial procedures, including preventing or predicting terrorist attacks or other criminal actions. AI is also being integrated into the military or national security operations in numerous countries.
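As anticipated in the list above, the following sketch illustrates, in very simplified form, how unsupervised anomaly detection might be applied to open procurement data of the kind discussed by Soylu et al. [23]. The dataset, column names and contamination rate are hypothetical, and the example is not drawn from any of the systems cited.

```python
# Minimal, illustrative anomaly detection on hypothetical procurement records.
import pandas as pd
from sklearn.ensemble import IsolationForest

# One row per awarded contract; unusual combinations of value, competition
# and award speed are the kind of pattern an auditor might want flagged.
contracts = pd.DataFrame({
    "award_value":   [52_000, 48_500, 1_250_000, 51_200, 47_900, 980_000],
    "num_bidders":   [5, 6, 1, 4, 7, 1],
    "days_to_award": [40, 35, 2, 38, 42, 3],
})

# Unsupervised model: no labelled fraud cases are required.
model = IsolationForest(contamination=0.3, random_state=0)
contracts["anomaly"] = model.fit_predict(contracts)  # -1 = anomalous

print(contracts[contracts["anomaly"] == -1])
```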
When properly planned and managed, digital technology can codify open practices that support accountability, equality and neutrality in public administration. By utilising real-time data and AI-driven analytics, governments may identify service needs, distribute resources equitably and enhance democratic legitimacy, reducing biases and corruption [26].
To overcome the difficulties caused by legacy systems, bureaucratic inertia and inadequate infrastructure, Leocadio [27] highlights the effectiveness of the gradual use of AI in public audit procedures. In order to promote a culture of continuous learning and adaptation, he emphasises the need for the thorough training of auditors both in the technical application of AI tools and in the interpretation of AI-derived insights within the framework of accepted audit standards. The gradual integration methodology clearly improves the accuracy of risk assessment, decision-making and audit results in general, leading to the more efficient management of public resources.
Some of these features are directly related to the functions performed by SAIs, such as reducing inefficiencies, improving public management, combating corruption and fraud or making proposals to improve the design and impact of public policies through evaluation and performance auditing. Therefore, in the next section we will focus on some of these AI tools that are not only useful for the public sector but will also be practical for the external control of public management.
However, the potential advantages of AI systems for the public sector are accompanied by a set of risks that cannot be circumvented, mainly addressing the following aspects:
-
Privacy, confidentiality and security: The collection and processing of large amounts of data in the public sector can pose significant challenges in terms of privacy and security. AI systems may exhibit flaws and vulnerabilities that must be foreseen to prevent unauthorised access, such as attacks that manipulate their ability to learn or act on what they have learnt. It is essential to ensure that citizens’ data are protected and not misused. It is also essential that citizens are informed of their rights, the applicable regulations and how they can make any complaints if they consider it necessary [26,27];
-
Transparency: AI systems, such as deep neural networks, are often difficult to interpret. Algorithmic transparency is the element by which citizens can know how autonomous decision systems make decisions that impact their lives [28]. Much of the processing, storage and use of information is performed by algorithms and in a non-transparent way, within a “black box” of virtually inscrutable processing, whose content is unknown even to its programmers [29]. This raises concerns about the lack of transparency in government decisions, which often have implications in the lives of individuals or groups, making it difficult to account for and understand how certain decisions are made. As noted by Berryhill et al. [11], when algorithms are too complex, the possibility of explaining them can be reinforced by traceability and audit mechanisms, alongside the disclosure of their scope;
-
Algorithmic discrimination, access and equity: AI algorithms can perpetuate biases and discrimination if trained on erroneous or biased historical data that reflect the bias or prejudice of the people who collect them. This may affect groups of citizens by gender, race, age or other factors in their access to resources or services, the level of surveillance to which they are exposed and even their ability to be taken into account in an environment that emphasises technologies [30]. Thus, algorithms can reinforce social-bias-generating injustices [31], distributing resources unevenly [32] and reinforcing the technocratic nature of public administration [33] (a minimal illustration of one simple bias check appears after this list).
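As a concrete, deliberately simplified illustration of the bias concerns noted in the last item above, the sketch below compares approval rates across two hypothetical groups in the output of an automated decision system; the data, group labels and the 0.8 threshold (the common “four-fifths” rule of thumb) are assumptions made for this example only.

```python
# Illustrative disparate-impact check on hypothetical automated decisions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["approved"].mean()
disparity = rates.min() / rates.max()  # ratio of lowest to highest approval rate

print(rates)
print(f"Selection-rate ratio: {disparity:.2f}")
if disparity < 0.8:
    print("Potential disparate impact: flag for further audit testing.")
```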
It is worth noting, for example, that less than one percent of AI-focused graduates in the USA chose government positions over academic or private-sector positions. This talent gap directly impedes the rollout of AI-based programmes, from advanced auditing techniques to evidence-based policy analyses, forcing agencies like the IRS to depend on temporary IPA interns for the data science and AI capabilities needed to evaluate tax audit datasets [28].
As noted by the CAF [22], the assessment of the potential impacts of the use of algorithms, together with the identification of associated risks, is an essential element in determining the necessary actions to ensure compliance with the ethical principles of AI.
In this context, some governments have already conducted work to encourage or require good practices in the use of data and the analysis techniques applied. That is the case of the UK government’s Data Science Ethical Framework or, for EU members, the EU’s 2018 General Data Protection Regulation and the AI Act, which entered into force in 2024 and which, being the first such legislative proposal in the world, could constitute a global benchmark for regulating AI in other jurisdictions.
The AI Act significantly strengthens the legislation on the development and use of AI and is focused on regulating AI to the extent that it has the capacity to harm society, following a risk-based approach: higher risk, stricter standards. Through rigorous technical standards, the AI Act could also establish Europe as a global leader in trustworthy AI. However, policymakers must actively address the challenges and imbalances for AI providers within the EU [34].
Even multilateral organisations, such as the United Nations (UN), the World Bank and UNESCO, have concerns about the risks brought by AI systems, and they are also undertaking efforts to create frameworks and guidelines for safe and trustworthy AI. Similarly, important AI-related issues are raised and analysed by the OECD AI Observatory.
At the national level, public and private initiatives aimed at raising public concern and supervising or auditing the use of AI in the public sector can also be found. That is the case of the NESTA (the UK’s innovation agency for social good) or ETICAS (a private foundation engaged in the auditing of AI systems, located in Spain).
Bearing this in mind, the establishment of effective AI governance structures in the public sector is particularly relevant, given the evolving nature of AI, the different levels of maturity of the technology used in the variety of AI systems applied and the high uncertainty on the limits and real effects and results of AI usage [22]. The process to establish a governance framework may include elements such as the following: policies and regulations, which indicate the principles and standards for their development, implementation and use; procedures and mechanisms, which help to ensure effective implementation; and institutions, which facilitate the development and implementation of policies and regulations, as well as collaborative governance structures, based on the participation of different stakeholders in the design of AI systems, in order to expand accountability and greater learning to develop preventive actions against biases, injustices and the surveillance of systems [35].
Additionally, the use of AI will bring structural changes in jobs within the public sector. On the one hand, AI-driven automation has the capacity to replace certain jobs, generating concerns regarding unemployment and the need to retrain and update the skills of public employees, preparing them for new roles.
On the other hand, as some of the structural barriers to the adoption of AI in the public sector concern the competencies needed, these must be updated and enhanced continuously. According to the European Commission [36], the competences can be classified into three categories, as presented in Figure 1.
All this falls within the framework of a long-term human resources development strategy in public services, which considers the classification of jobs, training programmes and the type of profiles and skills demanded [37] in this new scenario.
This competence model could be very useful for SAIs by identifying the types of capabilities needed to address the AI-related challenges that will arise as they extend the use of AI within their organisations (e.g., technical literacy and competencies, strategic planning and normative alignment).
In any case, SAIs will have to verify, through their audit work, that the introduction of AI in the public sector takes place within an appropriate governance framework, aimed at minimising the risks that may arise, such as the appearance of biases, injustice and inequality or uncertainty in the real effects of the introduction of AI, among others. They will also have to control additional aspects mentioned above, such as policies and regulations, institutions, procedures and mechanisms linked to AI and its use in the public sector.
This will mean, undoubtedly, a great challenge for SAIs worldwide in the coming years, and they should be preparing for it.

3. The External Control of Public Sector Performance in the Face of the Use of Artificial Intelligence

3.1. Key Challenges in AI Adoption

SAIs, the independent institutions in charge of the external control of the financial management and operations of public entities, cannot remain detached from the rapid advancement of AI in many areas, especially given the progress that is taking place in its application to public sector management. It is necessary to recognise the structural transformations that AI is introducing in the management of the public administrations audited and to consider the strategy that these institutions must design for the coming years, with the objective of taking advantage of the benefits that AI can provide to increase effectiveness and efficiency in auditing tasks and in other areas of the organisation and of facing the challenges that, without a doubt, will arise in the not-too-distant future.
Such a strategy should, at least, address two questions:
-
Which AI systems could be used as tools to perform the work in the institution in a more effective and efficient way?
-
How should the institution get ready to address the audit of AI systems employed in the management of public services?
These questions are intended as a forward-looking framework for SAIs that seek to prepare for a future where the auditing of AI systems will most certainly become an integral part of their tasks. We acknowledge that institutional awareness and capacity levels vary significantly across countries and sectors, so the use of case studies of different SAIs may help in designing the future strategy of several control institutions.
In this way, this third section seeks to address these considerations and to present some cases as examples of how these challenges are being tackled in some SAIs. The aim is to use case studies as exemplary materials, which are valued in SAIs and other audit communities, providing use-value for other researchers and thus acting as crystallizers, as stated by Morgan [38].

3.2. Institutional Strategies

The application of AI tools within SAIs is not merely a technical development; it is part of a broader institutional adaptation to the digital transformation of their auditees across the public sector. This part of the work explores how the integration of these tools aligns with the evolving role of SAIs and supports their ability to audit increasingly complex and data-driven environments.
As in other areas, the application of AI in audit work presents great opportunities for improvements in the effectiveness of internal and external control, as well as in the productivity of the work performed. In reality, this represents a step further in the integration of advanced technology as an essential instrument for the auditor, although in this case with some differentiating elements that add greater complexity while probably delivering superior medium- and long-term returns.
In fact, so far, technology has provided auditors with essential tools to increase the quality and productivity of their work, ranging from a simple spreadsheet or word processor to more complex, electronic audit management applications. More recently, the incorporation of big data analyses has spread throughout all phases of the audit cycle, allowing for broader and deeper work in each stage.
In this way, during the planning phase, data analysis allows the detection of risk areas, the selection of topics relevant to the audit, the identification of the databases to work with and the assessment of whether they meet the requirements or must be processed before being used in the development of the audit.
Subsequently, in the implementation phase, the data can be used to build models that allow evidence and errors to be obtained, as well as to cross-check millions of data points to obtain audit results and conclusions without the need to use samples, where the availability of such data allows.
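By way of illustration, the sketch below shows one simple form such full-population cross-checking could take: every payment is matched against a contract register, and unmatched or over-contract payments are flagged rather than relying on a sample. The file names and columns are hypothetical and serve only to illustrate the idea.

```python
# Illustrative full-population cross-check of payments against contracts.
import pandas as pd

payments = pd.read_csv("payments.csv")            # columns: contract_id, amount_paid
contracts = pd.read_csv("contract_register.csv")  # columns: contract_id, contract_value

merged = payments.merge(contracts, on="contract_id", how="left", indicator=True)

# Payments with no corresponding contract in the register.
unmatched = merged[merged["_merge"] == "left_only"]

# Contracts whose cumulative payments exceed the contracted value.
totals = merged.groupby("contract_id", as_index=False).agg(
    paid=("amount_paid", "sum"), contract_value=("contract_value", "first"))
overpaid = totals[totals["paid"] > totals["contract_value"]]

print(f"{len(unmatched)} payments without a contract; {len(overpaid)} contracts overpaid")
```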
In the field of external control, the Moscow Declaration of the INTOSAI, which formalised the main conclusions of the XXIII INCOSAI held in 2019, highlights the commitment of SAIs to respond effectively to the opportunities generated by technological advances. In this regard, recognising that data analysis is a necessary innovation within an SAI, the declaration commits institutions to “promote the principle of availability and openness of data, source code and algorithms” and to “aspire to make better use of data analysis in audits, including adaptation strategies, such as planning the audits, developing teams experienced in data analysis and introducing new techniques into the practice of public auditing”. In addition, with the aim of reinforcing the impact of SAIs on society, the declaration encourages the training of the auditors of the future so that they are “able to use data analysis, AI tools and advanced qualitative methods, to reinforce innovation...”
The availability and quality of data, therefore, are strategic elements in any control institution, since these can form the basis for carrying out risk analyses and for detecting cases of fraud or management anomalies, which allow them to be more effective and more efficient in auditing, but can also help to promote the accountability and transparency of the public administration, along with a better adherence to the principles of good management.
In addition to the direct impact of the internal use of data—saving time and resources and eliminating errors—the exchange of data between SAIs and other institutions can also be important, which would favour the development of a collaboration that stimulates innovation and the design of common solutions and tools.
At this time, having quality data is essential to be able to use AI tools applied to auditing, a trend that will grow exponentially in the coming years. AI models use large structured and unstructured databases to obtain accurate and reliable results, but if the data used are not of high quality, the results of the models could be wrong or biased. Hence, it is important to have datasets that meet a series of required characteristics.
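As a simple illustration of what assessing such characteristics might involve in practice, the sketch below runs a few basic data-quality checks (duplicates, missing identifiers, implausible values) on a hypothetical dataset; the file, columns and rules are assumptions made for this example.

```python
# Illustrative data-quality checks before using a dataset in AI-based audit tools.
import pandas as pd

df = pd.read_csv("grants.csv", parse_dates=["award_date"])

quality_report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_beneficiary": int(df["beneficiary_id"].isna().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
    "future_award_dates": int((df["award_date"] > pd.Timestamp.today()).sum()),
}

for check, count in quality_report.items():
    print(f"{check}: {count}")
```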
In this context, the Overview of Big Data Audits carried out by SAIs between 2016 and 2021 [39], prepared by the INTOSAI’s Big Data Working Group, states that “the quality and security of audit data are vital to ensure the normal functioning of the big data audit. Improving the quality and security of data can help to broaden the scope of the audit, improve the quality of the analysis and avoid internal control problems” and considers two additional conditions relevant to the use of big data analysis techniques in SAIs:
-
Institutional arrangements, such as regulations or decisions in public administration, are necessary to promote data sharing, ensure data openness and improve the value derived from data utilisation;
-
The construction of an audit platform in the SAI that incorporates the entire process of data collection, preparation, storage, analysis and presentation enables the integrated management of auditors, audit procedures and technical means in a single system.
Undoubtedly, the ability of AI to efficiently process large amounts of data has not only increased the value of the data collected during the audit but has also reduced the cost and processing time of this big data [40].
In fact, the AI systems most frequently used in the field of auditing are based on ML techniques, that is, as stated above, learning algorithms whose objective is to obtain a result that depends on the input variables of the models (data), especially the supervised ML. Table 1, adapted from Rivera [41], lists some of the most used ML tools. This is basically a comparative summary of the main ML tools relevant to external audit functions and their potential applications in SAIs, with adjustments to contextualise the potential of each tool specifically within the external control functions of SAIs. By aligning algorithmic capabilities with real-world audit scenarios, it relates directly to the practical examples shown in this section, in which SAIs use these technologies to identify areas of risk, automate repetitive tasks and improve the effectiveness and efficiency of their controls.
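To give one concrete and deliberately simplified example of the kind of technique summarised in Table 1, the sketch below clusters audited entities by spending indicators so that atypical profiles can be prioritised for review; the indicators, values and number of clusters are hypothetical, and the example does not reproduce any tool listed by Rivera [41].

```python
# Illustrative clustering of audited entities by hypothetical risk indicators.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

entities = pd.DataFrame({
    "spend_per_capita":    [310, 295, 880, 305, 920, 300],
    "share_direct_awards": [0.10, 0.12, 0.55, 0.09, 0.60, 0.11],
    "late_filings":        [0, 1, 6, 0, 7, 1],
})

X = StandardScaler().fit_transform(entities)
entities["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Small clusters with unusual profiles become candidates for closer audit work.
print(entities.groupby("cluster").mean())
```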
In any case, ML systems need to be trained with large amounts of structured and unstructured data that meet a series of previously defined requirements and in which the patterns previously defined by the auditor must be found; the practical application of these systems must therefore be tailored to each specific case, adapting to the purpose pursued and the information available. In return, they allow repetitive tasks to be performed, thus replacing human intervention, although they may require (at least at the beginning of their application) some monitoring of the results obtained.
In this way, by identifying anomalies and trends that lead to the detection of risk areas, AI streamlines the work of audit teams, which can be more productive and devote more time to tasks in which their experience and knowledge contribute more value to the final work, eliminating the most repetitive tasks and reducing the margin of error in the results.
The application of Robotic Process Automation (RPA) to typically repetitive audit tasks is also of interest, since automating them allows a greater number of checks to be conducted in less time, thus gaining efficiency. RPA can identify inconsistencies or outliers in the information submitted during the audit.
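The sketch below illustrates, under purely hypothetical column names and rules, the kind of deterministic consistency checks such automation could apply to every submitted record, with exceptions exported for human follow-up; it is an illustration of the idea rather than of any RPA product.

```python
# Illustrative automated consistency checks on hypothetical expense claims.
import pandas as pd

claims = pd.read_csv("expense_claims.csv",
                     parse_dates=["invoice_date", "payment_date"])

q1, q3 = claims["total"].quantile(0.25), claims["total"].quantile(0.75)

exceptions = pd.concat([
    # Totals that do not equal the sum of their reported line items.
    claims[(claims["total"] - claims["line_items_sum"]).abs() > 0.01]
        .assign(rule="total_mismatch"),
    # Payments recorded before the invoice was issued.
    claims[claims["payment_date"] < claims["invoice_date"]]
        .assign(rule="payment_before_invoice"),
    # Statistical outliers on the amount (simple interquartile-range rule).
    claims[claims["total"] > q3 + 1.5 * (q3 - q1)]
        .assign(rule="amount_outlier"),
])

exceptions.to_csv("exceptions_for_review.csv", index=False)
print(f"{len(exceptions)} exceptions flagged for human review")
```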
At this point, it is relevant to analyse how some SAIs have already incorporated AI tools into their control activities, since their experience constitutes a valuable reference for other institutions which will, in the very near future, be required to adopt these technologies, if they are not already using them.
Thus, several SAIs are starting to work more intensively with large amounts of data, sometimes by having a data management centre or a specific office within the organisation in charge of the data analysis and techniques.
That is the case of the SAI of the USA, which, since 2019, has had an innovation laboratory to explore and experiment with data science techniques and emerging technologies “https://www.gao.gov/” (accessed on 8 February 2025). Similarly, the SAI of India has a Centre for Data Management and Analytics (CDMA) for all activities related to data analysis, providing guidance on data analysis for selected audit tasks, providing the necessary training to officials and working on the strategy to identify the future scope of data analysis in the institution. The Centre regularly collects data from ICT systems across the country to analyse them and identify potential risk areas to inform future audits “https://cag.gov.in/en/page-cdma” (accessed on 8 February 2025).
Some SAIs are already working with robots, like the Federal Court of Accounts of Brazil, which, since 2017, has worked with a robot named Alice that performs an analysis of public tenders and procurement, with the aim of expediting the collection of data for the identification of irregularities in public tenders, thus carrying out the heavy and repetitive work. It also has other tools such as Labcor, which focuses on intelligence and anti-corruption actions, and SAO, a tool that automatically analyses public works budgets. Another interesting initiative is Marina (Risk Map in Public Procurement), which helps to prevent the corruption associated with public contracts through the supervision of the risk linked to them, based on indicative signs of deficiencies or weaknesses in the tenders or the winning company.
AI is also used in the automation of certain audit tasks by the SAI of the Philippines, which has developed the MIKA-EL SAI AI Platform, able to automatically identify anomalous or unusual transactions among millions of operations from data collected from audited bodies and cross-checked with data from other administrative levels (https://www.coa.gov.ph/coa-explores-the-use-of-ai-to-detect-statistical-anomalies/, accessed on 8 February 2025). The SAI of Spain has automated the recognition of invoices related to electoral processes for the subsequent integration of their results into the databases of the institution, allowing automated exploitation and the significant acceleration of the analysis and control of the electoral accounting.
Finally, the SAI of Nepal uses AI techniques for various audit tasks [42]. For example, it uses optical character recognition (artificial neural networks) to automate the extraction of certain fixed information based on document fields in various formats and will employ algorithms that reconcile revenue collection data to help draw audit conclusions. They also use RPA (Robotic Process Automation) to identify inconsistencies and outliers that human auditors can address later.
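The sketch below gives a hedged, generic illustration of OCR-based extraction of fixed document fields of the kind mentioned above; it assumes the open-source pytesseract wrapper around the Tesseract engine and hypothetical invoice layouts, and it does not reproduce the pipeline of any SAI cited here.

```python
# Illustrative OCR extraction of fixed fields from a scanned document.
import re
from PIL import Image
import pytesseract

def extract_invoice_fields(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    # Hypothetical field patterns; real documents would need tailored rules.
    number = re.search(r"Invoice\s*No\.?\s*[:#]?\s*(\S+)", text, re.IGNORECASE)
    total = re.search(r"Total\s*:?\s*([\d.,]+)", text, re.IGNORECASE)
    return {
        "invoice_number": number.group(1) if number else None,
        "total": total.group(1) if total else None,
    }

print(extract_invoice_fields("invoice_scan.png"))
```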
The cases analysed only represent some examples of how SAIs are starting to introduce the use of AI tools with the aim of automating tasks or detecting risk areas, among other benefits. Most of the cases shown reveal that the most widely used AI tools are limited memory and narrow AI types, which are based on the use of ML, mainly to automate checks and analyses and to automatically extract information from various sources. The examples presented also reveal considerable heterogeneity in the readiness of SAIs to audit AI systems, reflecting diverse institutional, legal and technical capacities. While some institutions, such as those of Brazil, the Philippines or the United States, have taken proactive steps to integrate AI tools into their audit processes, others remain in exploratory phases.
In this sense, studying the state of play in this area is essential, given the rapid advancement of emerging technologies, in order to learn from the experiences of other control institutions and to speed up the process of adaptation and usage of AI within SAIs. Across the board, common challenges include technical capacity gaps, data governance limitations and the absence of standardised audit frameworks for algorithmic systems. These findings suggest the need for coordinated international efforts, knowledge-sharing mechanisms and targeted capacity building to bridge the divide.

3.3. Illustrative Uses of AI by SAIs

As the development and implementation of AI systems by public administrations grow exponentially due to their potential to improve public services and reduce costs, among other benefits, new challenges and risks have also emerged, such as biases leading to discrimination or unequal treatment, the need to ensure data security and privacy or incorrect decision-making based on automated processes, among others. AI offers unprecedented opportunities to analyse vast quantities of data with speed and accuracy, but it also introduces new complexities in terms of governance, bias and ethical use.
Therefore, SAIs must adapt their organisations to audit algorithm-based applications within operational or compliance audits in order to remain effective guardians of public accountability.
In this regard, Garde Roca [43] highlights that such audits should adopt a very pragmatic approach, focusing on the essential items and analysing and evaluating the algorithms to verify how they work and whether they are fulfilling their stated objectives or producing biased results and generating new social vulnerabilities outside the existing legal and regulatory framework in each case.
However, the audits carried out by SAIs must also address specific aspects of the contribution of AI systems to improving the efficiency and effectiveness of public management, preventing corruption and the misuse of public funds and increasing transparency and accountability within the public sector.
In any case, the audit of the algorithms on which the tools used in public management are based is an area not yet explored by SAIs, due to various factors that hinder its practical application in the field of external control. To facilitate this task, the first version of the guide Auditing Machine Learning Algorithms: A White Paper for Public Auditors was published in 2020 “https://www.auditingalgorithms.net/” (accessed on 8 February 2025). Prepared by the SAIs of Finland, Germany, the Netherlands, Norway and the United Kingdom, it aims to “help SAIs and individual auditors in carrying out audits of Machine learning algorithms applied by public bodies”. The guide is specifically designed for auditors with some knowledge of quantitative methods but does not assume expert knowledge of machine learning models. It outlines a series of questions that auditors can use when auditing ML models, which can also serve public managers and guide them during the design of the models to understand what aspects will affect the audit. Some of the problem areas identified focus on the possibility of disregarding certain requirements in ML models, such as fairness or transparency, by over-focusing on performance; the poor communication of model requirements between managers and developers, resulting in opposite outcomes in terms of lower performance and higher costs; and the adoption of a model that cannot be maintained or that cannot comply with the regulations in the medium term due to dependence on external developers.
From the auditor’s perspective, the guide identifies certain implications for auditors stemming from the audit of AI applications:
-
The need for a good understanding of the high-level principles of ML models.
-
The need to understand the most common coding languages and model implementations and be able to use the right software tools.
-
The need for a computer infrastructure to support machine learning with a high computing power, which often involves cloud-based solutions.
-
The need to have a basic understanding of cloud services to properly perform audit work.
In addition to these needs, which may represent potential barriers when auditing ML models, other difficulties in this area can be identified:
-
Little previous knowledge: The application of AI in public management is relatively recent and not yet widespread, so both management and auditing experience is scarce.
-
Low skills of auditors: As it is a new and complex field of knowledge, it will be difficult to have staff with sufficient training to deal with this type of audit, so this should be one of the priorities in the control institution. This type of audit will, in any case, be closely linked to the audit of information systems, so the presence of specialist teams in this area will facilitate the transition.
-
Few guides and manuals: Precisely because of the novelty of auditing AI models, it is difficult to find guides or manuals that address this type of audit.
More recently, Koshiyama et al. [44] have published a research paper on algorithm auditing, focusing on the legal, ethical and technological risks of AI systems. The authors identify five key risk variables that must be assessed in an algorithm audit: bias, efficacy, robustness, privacy and explainability.
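By way of a hedged illustration of how one of these variables, explainability, might be probed during an algorithm audit, the sketch below uses permutation importance, which measures how much a model’s performance drops when each input is shuffled; the model, synthetic data and features are assumptions made for this example and are unrelated to the framework in [44].

```python
# Illustrative explainability check: permutation importance on a synthetic model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Features whose shuffling barely changes performance contribute little to decisions.
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {importance:.3f}")
```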
Nevertheless, it is obvious that this field of knowledge should not remain foreign to the control and oversight function, and therefore some SAIs are already taking the first steps to implement it, although not in all cases through audits.
In some SAI’s the first published reports focus on the government’s strategy or framework in the development of AI systems. That is the case of the European Court of Auditors and the UK’s SAI as stated below.
In May 2024, the European Court of Auditors published a special report [45] to assess the effectiveness of the Commission’s contribution to the development of the EU’s AI ecosystem. Through the examination of several of the Commission’s coordination and implementation actions related to AI and to the adoption of a common legal framework in this field, the Court concluded that the measures adopted by the Commission and at a national level “were not effectively coordinated due to the few governance tools available, their partial implementation, and outdated targets. Furthermore, EU AI investment did not keep pace with global leaders. The implementation of infrastructure and capital support for SMEs to embrace AI technologies took time, and so did not yield significant results by the time of the audit. The Commission succeeded in increasing the volume of EU funded research projects in the AI field, but did not monitor their contribution to the development of the EU AI ecosystem. The Commission’s efforts to ensure that research results translated into innovation were partially effective”.
The UK SAI has produced two reports on the application of AI in British administration. The first one, published in 2021, focused on “Challenges in using data across government in the administration” [46] and gathered the conclusions reflected in several of its reports on the importance of evidence-based decision-making at all levels of government activity and the problems that arise when data are inadequate. The report identified three areas where the British government needed to establish preconditions for the successful use of data for decision-making and public service delivery:
-
Have a clear strategy and leadership to improve the use of data;
-
Have a coherent infrastructure for data management (emphasis on aspects such as data quality or the interoperability of tools);
-
Have broader conditions (e.g., legal, training and security) to safeguard and support the better use of data.
The report concluded that, after years of efforts and failures, the UK government had not exercised sustained strategic leadership over data and their use, and that the early evidence that the situation was improving could turn into another missed opportunity without a clear data strategy and leadership.
More recently, in March 2024, the SAI of the UK published a report on the “Use of AI in government”, which considers the effectiveness of the government in maximising the opportunities and mitigating the risks of AI in providing public services by looking into the British government’s strategy and governance for AI use in public services. The report also examined how government bodies were using AI and their understanding of the opportunities, as well as central government plans for supporting the testing, piloting and scaling of AI and the progress in addressing barriers to AI adoption. The conclusions refer to the need to implement and adopt AI at scale across the public sector in order to maximise its opportunities and state that the “development and deployment of AI in government bodies is at an early stage and there is activity underway to develop strategies, plans and governance” [47].
Nonetheless, some SAIs are starting to audit algorithms and specific AI systems used in public services.
In 2021, the SAI of the Netherlands published the “Understanding Algorithms” report, which examines whether the government has avoided biases in its use of algorithms and whether the consequences for citizens and businesses affected by public policies were monitored. The audit concluded that, while the government recognises the importance of privacy, it takes little account of ethical aspects, and pointed out that algorithms are not without risk, since the incorrect or biased use of a database can have a discriminatory impact. Moreover, the algorithms employed by the government were relatively simple, designed mainly to automate decisions, and were not based on ML. The audit also led to the conclusion that algorithms always had the involvement of people in the learning process, although this limited the benefits that can be gained from the use of AI in the public sphere. The auditors emphasise the importance of paying attention to public concerns and doubts about algorithms, so citizens must be able to understand the use and operation of algorithms. On the other hand, as the government’s use of algorithms can become dependent on external suppliers (proprietary rights and personal data processing), it must ensure that data security is adequate to prevent sabotage, espionage and criminality.
The recommendations contained in the report are directed towards a clear and consistent definition of the algorithms used by the administration and their quality requirements, as well as their publication and the involvement of citizens in the knowledge of algorithms. Additionally, the administration was invited to use the audit framework, which has been developed and used by the SAI, to develop new algorithms and address aspects such as governance and accountability, model and data confidentiality, the quality of technology controls and ethics.
The SAI of the USA has published, among others, three interesting reports regarding the use of ML models in the USA’s public administration:
-
Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics analyses current and emerging ML medical diagnostic technologies being used for five selected diseases, the challenges affecting their development and adoption and policy options to help address these challenges;
-
Department of Defense Needs Department-Wide Guidance to Inform Acquisitions, which examines the key factors that the 13 selected private companies claim to take into account when acquiring AI capabilities, the extent to which the Department of Defense has guidelines for the acquisition of AI across the department and how these guidelines reflect, if at all, the key factors identified by the private sector companies;
-
Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements. This report analyses the application of AI in major federal agencies, focusing on current and anticipated uses of AI reported by federal agencies, the degree of completeness and accuracy of AI reports and the degree of compliance with certain federal AI policies and guidelines.
The SAI of the USA has also published other types of documents related to this subject as a result, without a doubt, of the consideration of AI auditing as a priority area in which to generate experience and knowledge both inside and outside of the SAI.
Therefore, the first steps are already being taken in the conduct of audits of AI systems and their applications in the public sector. This is an area in which there is a long way to go, so sharing knowledge and experience among SAIs seems essential to move forward at a good pace.
Following what has been stated so far, SAIs will inevitably encounter some challenges in the medium term because of the use of AI systems both by the control institutions themselves and by the public sector they oversee. In this sense, according to a survey conducted by the European Court of Auditors [48], the challenges that the introduction of AI will bring to SAIs, as pointed out by most of the European Union SAIs, are the following:
  • Insufficient technical skills;
  • Compliance with legal, ethical, data protection and contractual obligations;
  • The multidisciplinary complexity of the subject;
  • A lack of business analysis skills in the institution;
  • Budgetary constraints.
In response to these challenges, it is essential to have an institutional strategy that addresses all the challenges mentioned above and that undoubtedly must include some of the following elements:
  • The awareness and commitment of the governing bodies to implement the necessary initiatives and measures, providing the necessary funding and staff devoted to the implementation process and development;
  • The design of a data policy or strategy within the institution should ensure the availability of consistent and reliable information to be able to perform the analysis and the protection of the data, which minimises potential vulnerabilities, granting the necessary relevance to the cybersecurity measures adopted;
  • Fostering the training of staff to achieve the adaptation of the auditor to the technological environment and promoting the creation of multidisciplinary teams. The training and capacity-building programmes developed should consider the use of already defined competence models, such as the one mentioned in Figure 1, that, according to the European Commission, respond to the competences needed or used by individuals engaging with AI in the public sector. They should include technical aspects of AI, especially the knowledge about the handling and processing of data that should be part of the qualification of the auditor and ICT staff of any SAI since the examination of the data used by the public administration and its quality is becoming an essential part of audit work. The deeper knowledge of ethical considerations, potential biases and relevant legislation must also be considered relevant;
  • The recruitment of experts in data analytics and AI models but also those with legal and technical training that qualify them to audit the algorithms applied in public administration AI systems and to develop internal protocols to identify and mitigate biases in an SAI’s AI systems. The latter will involve regular testing before and after deploying AI tools to ensure the fairness and transparency of the SAI oversight operations;
  • Inter-institutional cooperation and collaboration with other control bodies, promoting the secure interconnection of their internal networks to share information that can help every control institution in the tasks of the internal and external control of public funds and operations.

4. Conclusions

Unlike broader studies that focus on AI in public administration, this paper brings a specific institutional lens by analysing how SAIs, the independent institutions in charge of the control of the economic management and operations of public entities, are engaging with AI as both users and auditors. The comparative diagnosis offers a structured understanding of current responses, highlighting strategic gaps and emerging good practices. This contributes to the development of a common knowledge base that can support institutional learning and policymaking in external audit institutions worldwide.
The analysis conducted highlights several relevant issues that SAIs should not ignore as they witness the increasing use of AI and data in public administrations. First, although the progress of AI in the public domain is helping to improve decision-making, the design and delivery of services, and the internal management and efficiency of public entities, it poses new challenges to oversight and control institutions. While supervising and auditing the compliance and performance of public services and entities, SAIs will have to address emerging risks such as threats to the privacy, confidentiality and security of citizens' data; the need for transparency about how autonomous decision systems based on algorithms reach decisions that affect citizens' lives; and the need to ensure the absence of algorithmic discrimination or bias.
Secondly, the use of AI systems is expected to foster effectiveness and efficiency in SAIs' auditing procedures, provided they learn to take advantage of the diverse types of AI tools that could be useful in their field of activity. To achieve this, they must prepare and, above all, recognise the importance of having available, good-quality data and, therefore, the need to assess that quality, both for public sector data and for the data of the control institution itself. Some SAIs, such as those of the USA and India, are already working on tools that exploit large volumes of data to carry out comparisons and analyses that reveal risks or situations that were previously difficult to discover, tools that can eliminate blind spots in auditing and improve the ability to prevent and address the main risks in different areas of governance. The SAI of Nepal uses neural networks to automate the extraction of certain fixed information from documents, while most of the SAI cases analysed already use robots and other ML-based test and control automation systems.
Thirdly, the need to audit the AI systems used in the management of public services is increasingly a reality that all SAIs will have to address in order to safeguard trust and accountability in the digital age. Although only initial steps have been taken, some SAIs have published reports on supporting the development of AI in Europe (the European Court of Auditors), the use of data and AI systems across government (the UK SAI), the use of ML models in public administration (the USA SAI) and the algorithm-based tools used by the government (the Netherlands SAI). Progress in this area is still slow, but the role of SAI audits seems highly relevant, not only to guarantee compliance, effectiveness and efficiency in the application of AI systems, but also to contribute, through auditing, to the development of regulations and public policies aimed at promoting the accountability, transparency and fairness of algorithms.
Finally, this work has limitations arising from the scarcity of previous analyses of AI systems and auditing practices in the control of the economic and financial management of public funds and services. It is important to reiterate that this paper is based on a conceptual and qualitative analysis of the available documentation, and the lack of primary empirical data and the reliance on publicly available documents have been further drawbacks in such a novel field of study. Given the evolving nature of both AI systems and public sector auditing practices, the assumptions, findings and reflections presented here should be interpreted within the conceptual scope of this study; they may require further empirical validation or adjustment, and future research is needed to test these observations in practice and adapt them to new regulatory and technological developments.
All this leads us to propose several directions for future research employing empirical methodologies, such as case-based fieldwork, broad SAI surveys and stakeholder interviews, focused on the use of AI systems and the conduct of algorithmic auditing in SAIs. Such research could inform the design of an AI strategy, not only within SAIs but also within any institution, or even company, in charge of controlling and supervising public management. It could also identify the difficulties and impediments that SAIs may encounter in the use of AI and suggest technical and organisational solutions to overcome them. Another relevant line of analysis could focus on the potential use of AI systems in the jurisdictional area, for those SAIs that exercise this additional function.

Author Contributions

Conceptualization, A.M.L.-H. and D.G.-M.; methodology, A.M.L.-H. and D.G.-M.; formal analysis, A.M.L.-H. and D.G.-M.; investigation, A.M.L.-H., D.G.-M. and M.G.; resources, M.G.; writing—original draft preparation, A.M.L.-H. and D.G.-M.; writing—review and editing, M.G.; visualization, M.G.; supervision, A.M.L.-H.; funding acquisition, A.M.L.-H. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministerio de Ciencia e Innovación, grant number PID2021-128713OB-I00, MCIN/AEI/10.13039/501100011033, and by ERDF "A Way of Making Europe".

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

Mariia Godz acknowledges the support of the scientific project with the reference number PID2023-146575NB-I00, funded by MICIU/AEI/10.13039/501100011033 and by "ESF+".

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mo Ahn, J. Artificial Intelligence in Public Administration: New Opportunities and Threats. Korean J. Public Adm. 2021, 30, 1–33. [Google Scholar]
  2. Asociación Española de Contabilidad y Administración de Empresas (AECA). La transformación digital del sector público en la era del gobierno. In Documento de las Comisiones Nuevas Tecnologías y Contabilidad (nº 18) y Contabilidad y Administración del Sector Público (nº 16). 2022. Available online: https://aeca.es/publicaciones2/documentos/nuevas-tecnologias-y-contabilidad-documentos-aeca/nt18_ps16/ (accessed on 7 February 2025).
  3. Wang, P. On Defining Artificial Intelligence. J. Artif. Gen. Intell. 2019, 10, 1–37. [Google Scholar] [CrossRef]
  4. McCarthy, J. What Is Artificial Intelligence? 2004. Available online: https://borghese.di.unimi.it/Teaching/AdvancedIntelligentSystems/Old/IntelligentSystems_2008_2009/Old/IntelligentSystems_2005_2006/Documents/Symbolic/04_McCarthy_whatisai.pdf (accessed on 2 February 2025).
  5. Tzafestas, S.G. Roboethics. A Navigating Overview; Springer: Cham, Switzerland, 2016. [Google Scholar]
  6. Rich, E. Artificial Intelligence; McGraw-Hill: New York, NY, USA, 1983; 411p. [Google Scholar]
  7. Patterson, D.W. Introduction to Artificial Intelligence and Expert Systems; Prentice Hall of India: Hoboken, NJ, USA, 1990. [Google Scholar]
  8. Organisation for Economic Cooperation and Development (OECD). Artificial Intelligence in Society. 2019. Available online: www.oecd.org/going-digital/artificialintelligencein-society-eedfee77-en.htm (accessed on 7 February 2025).
  9. Organisation for Economic Cooperation and Development (OECD). Recommendation of the Council on Artificial Intelligence. 2019. Available online: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed on 7 February 2025).
  10. Fjelland, R. Why general artificial intelligence will not be realized. Humanit. Soc. Sci. Commun. 2020, 7, 10. [Google Scholar] [CrossRef]
  11. Berryhill, J.; Heang, K.K.; Clogher, R.; McBride, K. Hello, World: Artificial Intelligence and Its Use in the Public Sector. OECD Working Papers on Public Governance No. 36. 2019. Available online: https://www.ospi.es/export/sites/ospi/documents/documentos/Tecnologias-habilitantes/IA-Public-Sector.pdf (accessed on 7 February 2025).
  12. Frank, M.R.; Wang, D.; Cebrian, M.; Rahwan, I. The evolution of citation graphs in artificial intelligence research. Nat. Mach. Intell. 2019, 1, 79–85. [Google Scholar] [CrossRef]
  13. Reyes Alva, W.A.; Recuenco Cabrera, A.D. Artificial intelligence: Road to a new schematic of the world. Sciéndo 2020, 23, 299–308. [Google Scholar]
  14. Bostrom, N. Superintelligence; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  15. Ghosh, M.; Thirugnanam, A. Introduction to Artificial Intelligence. In Artificial Intelligence for Information Management: A Healthcare Perspective. Studies in Big Data; Srinivasa, K.G., Siddesh, G.M., Sekhar, S.R.M., Eds.; Springer: Singapore, 2021; Volume 88, pp. 23–44. [Google Scholar]
  16. Xu, C.E.; Xu, L.S.; Lu, Y.Y.; Xu, H.; Zhu, Z.L. E-government recommendation algorithm based on probabilistic semantic cluster analysis in combination of improved collaborative filtering in big-data environment of government affairs. Pers. Ubiquitous Comput. 2019, 23, 475–485. [Google Scholar] [CrossRef]
  17. Guillem, F. Funciones y Características de la Inteligencia Artificial. Seguritecnia: Revista decana independiente de seguridad. Seguritecnia 2022, 493, 174–181. [Google Scholar]
  18. Ubaldi, B.; Le Febre, E.M.; Petrucci, E.; Marchionni, P.; Biancalana, C.; Hiltunen, N.; Intravaia, D.M.; Yang, C. State of the Art in the Use of Emerging Technologies in the Public Sector. OECD Working Papers on Public Governance No. 31. 2019. Available online: https://www.sipotra.it/wp-content/uploads/2019/09/State-of-the-art-in-the-use-of-emerging-technologies-in-the-public-sector.pdf (accessed on 7 February 2025).
  19. Samoili, S.; Lopez, C.M.; Gomez Gutierrez, E.; De Prato, G.; Martinez-Plumed, F.; Delipetrev, B. Defining Artificial Intelligence. Towards An Operational Definition and Taxonomy of Artificial Intelligence (JRC118163) [EUR-Scientific and Technical Research Reports]. Publications Office of the European Union. 2020. Available online: https://publications.jrc.ec.europa.eu/repository/handle/111111111/59452 (accessed on 7 February 2025).
  20. Valle-Cruz, D.; Criado, I.; Sandoval-Almazán, R.; Ruvalcaba-Gómez, E.A. Assessing the public policy cycle framework in the age of artificial intelligence. From agenda-setting to policy evaluation. Gov. Inf. Q. 2020, 37, 101509. [Google Scholar] [CrossRef]
  21. Van Ooijen, C.; Ubaldi, B.; Welby, B. A Data-Driven Public Sector: Enabling the Strategic Use of Data for Productive, Inclusive and Trusted Governance. OECD Working Papers on Public Governance No. 33. 2019. Available online: https://www.oecd.org/en/publications/a-data-driven-public-sector_09ab162c-en.html (accessed on 7 February 2025).
  22. CAF-Banco de Desarrollo de América Latina. Conceptos Fundamentales y Uso Responsable de la Inteligencia Artificial en el Sector Público. Informe 2. Corporación Andina de Fomento. 2022. Available online: https://scioteca.caf.com/handle/123456789/1921 (accessed on 7 February 2025).
  23. Soylu, A.; Corcho, O.; Elvesaeter, B.; Badenes-Olmedo, C.; Yedro-Martinez, F.; Kovacic, M.; Roman, D. Data Quality Barriers for Transparency in Public Procurement. Information 2022, 13, 99. [Google Scholar] [CrossRef]
  24. Alhazbi, S. Behavior-Based Machine Learning Approaches to Identify State-Sponsored Trolls on Twitter. IEEE Access 2020, 8, 195132–195141. [Google Scholar] [CrossRef]
  25. Henrique, B.M.; Sobreiro, V.A.; Kimura, H. Contracting in Brazilian public administration: A machine learning approach. Expert Syst. 2020, 37, e12550. [Google Scholar] [CrossRef]
  26. Newman, J.; Mintrom, M.; O’Neill, D. Digital technologies, artificial intelligence, and bureaucratic transformation. Futures 2022, 136, 102886. [Google Scholar] [CrossRef]
  27. Leocádio, D.; Malheiro, L.; Reis, J. Exploration of Audit Technologies in Public Security Agencies: Empirical Research from Portugal. J. Risk Financ. Manag. 2025, 18, 51. [Google Scholar] [CrossRef]
  28. Cui, I.; Ho, D.E.; Martin, O.; O’Connell, A.J. Governing by Assignment. SSRN Electron. J. 2024, 173, 157. [Google Scholar] [CrossRef]
  29. Criado, J.I. Inteligencia Artificial (y Administración Pública). Econ. Rev. Cult. Leg. 2021, 20, 348–372. [Google Scholar] [CrossRef]
  30. Diakopoulos, N. Accountability in Algorithmic Decision Making. Commun. ACM 2016, 59, 56–62. [Google Scholar] [CrossRef]
  31. Stone, P.; Brooks, R.; Brynjolfsson, E. 2016 Report, One-Hundred-Year Study on Artificial Intelligence. AI100. 2016. Available online: https://ai100.stanford.edu/2016-report (accessed on 7 February 2025).
  32. DeSouza, K. Delivering Artificial Intelligence in Government: Challenges and Opportunities. IBM Center for Business of Government. 2018. Available online: http://www.businessofgovernment.org/sites/default/files/Delivering%20Artificial%20Intelligence%20in%20Government.pdf (accessed on 7 February 2025).
  33. Brookfield Institute. Introduction to AI for Policymakers: Understanding the Shift. 2018. Available online: https://brookfieldinstitute.ca/intro-to-ai-for-policymakers (accessed on 7 February 2025).
  34. Kilian, R.; Jäck, L.; Ebel, D. European AI Standards-Technical Standardization and Implementation Challenges Under the EU AI Act (February 26, 2025). Available online: https://ssrn.com/abstract=5155591 (accessed on 7 March 2025).
  35. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; New York University Press: New York, NY, USA, 2018. [Google Scholar]
  36. European Commission; Joint Research Centre; Medaglia, R.; Mikalef, P.; Tangi, L. Competences and Governance Practices for Artificial Intelligence in the Public Sector; JRC138702; Publications Office of the European Union: Luxembourg, 2024; Available online: https://op.europa.eu/en/publication-detail/-/publication/949913fa-aae4-11ef-acb1-01aa75ed71a1/language-en (accessed on 7 February 2025).
  37. Filgueiras, F. Inteligencia artificial en la administración pública: Ambigüedad y elección de sistemas de IA y desafíos de gobernanza digital. Rev. CLAD Reforma Democr. 2021, 79, 5–38. [Google Scholar] [CrossRef]
  38. Morgan, M.S. Exemplification and the use-values of cases and case studies. Stud. Hist. Philos. Sci. Part A 2019, 78, 5–13. [Google Scholar] [CrossRef]
  39. INTOSAI Development Initiative. Research Paper on Innovative Audit Technology; INTOSAI Working Group on Big Data: Beijing, China, 2022. [Google Scholar]
  40. Amsler, L.B.; Martinez, J.K.; Smith, S.E. Dispute System Design: Preventing, Managing, and Resolving Conflict; Stanford University Press: Stanford, CA, USA, 2020. [Google Scholar]
  41. Rivera, T. Application of machine learning in SAIs. Int. J. Gov. Audit. 2023, 50, 13–17. Available online: https://intosaijournal.org/es/journal-entry/machine-learning-application-for-sais/ (accessed on 8 February 2025).
  42. Prasad Dotel, R. Artificial Intelligence: Preparing for the Future of Audit. Int. J. Gov. Audit. 2020, 47, 32–35. Available online: https://intosaijournal.org/journal-entry/artificial-intelligence-preparing-for-the-future-of-audit/ (accessed on 8 February 2025).
  43. Garde Roca, J.A. ¿Pueden los algoritmos ser evaluados con rigor? Encuentros Multidiscip. 2023, 25, 25. Available online: http://www.encuentros-multidisciplinares.org/revista-73/juan-antonio-garde.pdf (accessed on 8 February 2025).
  44. Koshiyama, A.; Kazim, E.; Treleaven, P.; Rai, P.; Szpruch, L.; Pavey, G.; Ahamat, G.; Leutner, F.; Goebel, R.; Knight, A.; et al. Towards algorithm auditing: Managing legal, ethical and technological risks of AI, ML and associated algorithms. R. Soc. Open Sci. J. 2024, 11. Available online: https://royalsocietypublishing.org/doi/10.1098/rsos.230859 (accessed on 8 February 2025). [CrossRef]
  45. European Court of Auditors. Artificial Intelligence Initial Strategy and Deployment Roadmap 2024–2025; European Court of Auditors: Luxembourg, 2024. [Google Scholar]
  46. National Audit Office. Challenges in Using Data Across Government. UK National Audit Office. 2019. Available online: https://www.nao.org.uk/insights/challenges-in-using-data-across-government/ (accessed on 8 February 2025).
  47. National Audit Office. Use of Artificial Intelligence in Government. UK National Audit Office. 2024. Available online: https://www.nao.org.uk/wp-content/uploads/2024/03/use-of-artificial-intelligence-in-government-summary.pdf (accessed on 8 February 2025).
  48. European Court of Auditors. EU Artificial Intelligence Ambition: Stronger Governance and Increased, More Focused Investment Essential Going Forward. Special Report 8/2024. 2024. Available online: https://www.eca.europa.eu/en/publications/SR-2024-08 (accessed on 8 February 2025).
Figure 1. Competences for AI in the public sector. Source: European Commission, 2024, Joint Research Centre [36].
Table 1. Application of machine learning tools in SAIs.
Tool Type | Potential Implementation
Clustering algorithms for grouping similar data points | In external control, these could be useful, for example, for grouping expenditures by ministerial department or for identifying groups of similar spending programmes or projects, thereby facilitating their comparison and evaluation.
Anomaly detection algorithms (deviations from the norm) | Applied to control, these make it possible to detect budgetary irregularities or to prioritise audits towards the areas whose results deviate from the expected patterns.
Artificial neural networks | These algorithms perform tasks such as image and voice recognition and natural language processing (NLP); in external control they can be applied to process and analyse large amounts of unstructured information, such as texts and images, in order to extract audit trails or relevant conclusions. For example, prediction models can be built from the information available from previous audits to detect unauthorised aid payments, unusual expenses or overpricing.
Decision trees | By classifying data points according to a set of previously defined decision rules, they can be used in external control to classify transactions as fraudulent or not, to classify suppliers as high or low risk, or to predict the likelihood of fraud in a given area.
k-nearest neighbours | Among the simplest algorithms to implement, they classify a data point from its k nearest labelled neighbours. They are used in image and video recognition, stock market analysis, pattern recognition, intrusion detection and as building blocks of more complex algorithms.
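As a purely illustrative complement to Table 1, the sketch below (in Python, assuming scikit-learn is available) shows how two of the listed tool types, clustering and anomaly detection, might be combined on synthetic expenditure data. The feature set, number of clusters and contamination rate are assumptions made for the example, not recommendations drawn from any SAI's practice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic stand-in for expenditure records (no real data): payment amount
# and contract length in months.
X = np.column_stack([
    rng.lognormal(mean=10, sigma=1, size=500),
    rng.integers(1, 48, size=500),
])

# Clustering: group similar spending records to ease comparison across programmes.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Anomaly detection: flag records that deviate from the prevailing pattern,
# which an auditor could use to prioritise files for closer examination.
flags = IsolationForest(contamination=0.02, random_state=0).fit_predict(X)
suspect_idx = np.where(flags == -1)[0]

print(f"Cluster sizes: {np.bincount(clusters)}")
print(f"Records flagged for follow-up: {len(suspect_idx)}")
```

Any records flagged in this way would still need to be checked against the underlying files by an auditor; the point of the sketch is only that the tool types in Table 1 correspond to widely available, well-documented algorithms.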
